Toralf Scharf, Senior Scientist/Faculty Member at École polytechnique fédérale de Lausanne (EPFL), discusses the challenges of optical system design for modern optical technologies
Looking at the scientific community today, one gets the impression that optical system design is entering a new era driven by the latest developments in information technology. In many discussions, design concepts based on artificial intelligence (AI), machine learning and big data are the main subjects raised. And indeed, such concepts will revolutionise the world of optical applications.
But what will their impact be on optical design? Do we still need to understand optics to engineer a complex system, or will the software do all the work for us by creating unconventional solutions that we never thought of? To understand the value of optical system design based on conventional methods, one must take a closer look at how problems are treated by standard learning approaches in computer science.
A first consideration is the data volume, as well as the related information content. In many situations, far more data is acquired than is needed to solve the problem, and it is therefore very difficult to define how much information is really required to satisfy the needs of the application. In optical design, the usual approach is to use a selected or sparse array of rays, limited fields or specific aberration parameters, and to disregard other effects during optimisation.
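As a toy illustration (not taken from the article), the sparse-ray idea can be sketched with paraxial ray transfer (ABCD) matrices: only a handful of selected rays represent the whole bundle from an object point. The focal length, distances and ray angles below are made-up values for a simple 2f–2f imaging setup.

```python
import numpy as np

def thin_lens(f):
    """ABCD matrix of a thin lens with focal length f (paraxial)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def free_space(d):
    """ABCD matrix of propagation over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

# A sparse set of rays from an on-axis object point: (height, angle).
rays = [np.array([0.0, u]) for u in (-0.02, -0.01, 0.01, 0.02)]

# Object 100 mm in front of a 50 mm lens, image plane 100 mm behind it.
# Matrices act right to left: propagate, refract at the lens, propagate.
system = free_space(100.0) @ thin_lens(50.0) @ free_space(100.0)

image_rays = [system @ r for r in rays]
for r in image_rays:
    print(f"height = {r[0]:+.4f} mm, angle = {r[1]:+.4f} rad")
# All rays return to height ~0: the point is imaged to a point.
```

The four rays suffice here because the paraxial model is linear; real designs use many more rays, but the principle of selecting a sparse, informative subset is the same.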
Big data approaches will manage this on a different level: more parameters can be introduced, and sparse information processing with the related optimisation algorithms might lead to faster results in specific situations. Using more parameters will nevertheless not change the principal approach of an optical system design based on physical principles. That changes only when learning approaches are introduced, feeding learning algorithms with information to learn from.
An essential input is the ground truth, which still needs to be provided either from basic principles or from measurements of the highest quality: machine learning algorithms require highly accurate ground truth for training. Without such input information, the proper objective cannot be found. Providing ground truth is one of the major reasons for keeping conventional optical design in operation, as it offers predictable and traceable design schemes, which are exactly what is needed to measure ground truth data.
For optical design, two cases are important: machine learning based on commonly accepted standards (such as a point being imaged to a point in a perfect optical system), or learning of functions from examples, such as automated recognition of features and objects (for instance, cancer cells in microscope images). While the first case touches the principles of optical design itself, the second does not and only exploits redundancy in the data to process information for a selected application. Generalising such application-specific tuning to general optical design will not be fruitful, and may not be possible. The more interesting case is the attempt to introduce machine learning into optical system design based on basic principles.
So far, this step has not been taken, and there may be several reasons why. A machine learning model has difficulty answering questions about its robustness and content, reproducibility and integrity, and precision and complexity. While analytical models allow one to determine what happens outside the ground truth data, a machine learning algorithm does not easily permit such conclusions. Once an answer to a problem is found by learning, is there a way to understand what the system has designed and how it has done so? When is the complexity limit reached, where the data infrastructure needed to treat the problem becomes so heavy that an analytical design would be easier to find? How can tolerancing be controlled? Such questions need to be carefully investigated before a reliable scheme of optical design with learning can be established.
Machine learning for selected applications in optics is nevertheless developing extremely fast and will very soon be the standard in optical sensing. But this is not optical design. It is the application of machine learning algorithms to problems with a high redundancy of information that can be neglected for the specific objectives. This approach will change the world in the sense that very cheap devices will flood the markets and enable the extensive use of optics in applications that were previously accessible only with expensive specialised equipment. Expensive equipment will then only be needed to define the ground truth, measurement and evaluation for the selected objective, which can subsequently be served by a simple system.
For machine learning in optical system design, the path is less clear. This is partly because fabrication specifications and realisation limitations have always been, and must remain, part of the design process. The compatibility of analogue values with digital sampling is not always easy to manage; in analogue fabrication, a major problem is the response curve of the materials, which is of course not binary.
What, then, can be expected in the future for modern techniques in optical design? Within a short timeframe, machine learning will take over the field of optical applications and sensing. In part, this has already happened: today, we can extract much more information from images and illuminated scenes than we could a few years ago. Examples are depth-recognition algorithms based on single images and face-recognition devices based on point cloud projection. This trend will continue towards sensing in biology and medicine.
However, the optical design of such systems is based on conventional techniques and does not use machine learning. The second step will be the application of machine learning within optical design itself. Already today, complex algorithms are used to treat problems of higher dimensionality, and the tendency is clear: find optimal solutions with machine learning. Digital optics will be the door opener to this field, and first examples show that a tremendous increase in complexity can be handled with new tools for electromagnetic field calculation, for instance when an adjoint field method is applied.
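The appeal of the adjoint approach is that one extra "backwards" solve yields the gradient of a figure of merit with respect to all design parameters at once. A minimal sketch of that idea on a toy linear model (not an electromagnetic solver; the matrices, objective and step size are invented for illustration):

```python
import numpy as np

# Toy adjoint-gradient sketch.
# State equation: A(p) x = b, with A(p) = A0 + sum_i p_i * D_i.
# Objective:      J(p) = ||x(p) - x_target||^2.
# Adjoint trick:  one extra linear solve gives dJ/dp_i for ALL i at once.

rng = np.random.default_rng(0)
n, m = 5, 3                       # state size, number of design parameters
A0 = 2.0 * np.eye(n)
D = [0.1 * rng.standard_normal((n, n)) for _ in range(m)]
b = rng.standard_normal(n)
x_target = rng.standard_normal(n)

def objective_and_grad(p):
    A = A0 + sum(pi * Di for pi, Di in zip(p, D))
    x = np.linalg.solve(A, b)              # forward solve
    r = x - x_target
    J = r @ r
    lam = np.linalg.solve(A.T, 2.0 * r)    # single adjoint solve
    # dx/dp_i = -A^{-1} D_i x  =>  dJ/dp_i = -lam^T (D_i x)
    grad = np.array([-(lam @ (Di @ x)) for Di in D])
    return J, grad

# Plain gradient descent on the design parameters.
p = np.zeros(m)
for _ in range(50):
    J, g = objective_and_grad(p)
    p -= 0.01 * g
```

The point of the sketch is the cost structure: regardless of how many parameters `m` the design has, the gradient requires only two linear solves, which is why adjoint methods scale to the thousands of degrees of freedom found in metasurface and digital-optics designs.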
Examples of diffractive optics based on metasurfaces are starting to appear. The optimisation, however, needs more attention and will probably profit mainly from the application of learning algorithms. For analogue optics design, the situation is different: today's raytracing methods are already a kind of sparse data technology, as they use only selected information (rays) for optical design. It will be interesting to see in which direction research and application will push, and what the first example will be that leads outside the conventional design space. As for fabrication, the technology of freeform surfaces is ready to be applied to any challenging situation.
Toralf Scharf focuses his research activities at the École polytechnique fédérale de Lausanne on interdisciplinary subjects that bring micro-systems, material technology and optics together. With a background in surface physics (MSc) and physical chemistry (PhD) and profound experience in optics, he is familiar with all necessary aspects of technology development and application and can communicate with different scientific communities. In over 20 years of project execution with industry and governmental organisations, he has accumulated the experience needed to lead and execute projects at different levels.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 675745.
Please note: This is a commercial profile
Senior Scientist/Faculty Member
École polytechnique fédérale de Lausanne (EPFL)
Tel: +41 21 695 4286