At UP42, we believe that by bringing people together, we can bring technology together, and better understand the world together. There are wonderful people here making that possible. In our Meet UP42 series, we sit down with them to give you a behind-the-scenes look at who they are, what makes them tick, and how it all works.
First up, we’re talking to Dr. Markus Müller, our Lead Data Science Engineer heading up the Data Science team.
What’s your role in data science at UP42? Could you walk us through a typical day?
As Lead Engineer, my role is a mix of technical and coordinating tasks. I like to start my days early, so I usually show up at the office (or now, my home office) around 7:30 or 8 and try to get some deep work done, e.g. programming.
My team has a short daily standup where we coordinate with each other and identify any problems that need immediate attention. On average, I would say I have another couple of meetings every day to do with planning new features or cross-team communication. From time to time I also get drawn into customer conversations when technical expertise is needed. The rest of the time I spend reviewing code or writing it myself.
What did your journey before UP42 look like?
It started when I finished my time as a development worker in Indonesia. I worked literally in the jungle of Borneo for five years, giving GIS and remote sensing training and doing community mapping. Getting back into a “normal” career afterward is a bit challenging, so I was very glad that I could land a job at Landcare Research in New Zealand, where I had done my Master’s project quite a few years earlier. My work there was a mix of scientific programming for other researchers and doing some research myself, and it was the first time I learned about all the cool data science tools available in Python and R. Especially exciting for me was everything related to machine learning, such as boosted regression trees, random forests, and support vector machines.
My next stop was Germany, as I got an offer from Planet Labs as a geospatial software engineer in their Berlin office. There, I focused on developing my software engineering skills, did a lot of image processing and, for the first time, some deep learning, and also took on program management for ESA-related activities.
In January 2019 I joined UP42 as one of the very first permanent staff members. It was a great opportunity for me to develop and shape data science at a cool, new geospatial product company.
I really like that I joined the company very early on and that I have been able to watch its development, and contribute a bit to its direction, ever since. It’s nice that all teams are so closely connected and I can see what is going on. I think it’s a rare opportunity to have this full transparency.
You’re about to publish a paper on super-resolution. Could you tell us more about it?
Very glad to talk about this one! So super-resolution is a classical problem in computer vision. The aim is to algorithmically derive a higher-resolution version of an image. Almost all current approaches use convolutional neural networks, so nowadays this also falls into the deep learning domain. The approach is also applicable to satellite images, which bring a few specific problems of their own, for example that they have more spectral bands.
We adjusted several state-of-the-art super-resolution models to work with Pléiades imagery, trained the models leveraging the panchromatic band, and compared the results.
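To make the idea concrete, here is a minimal sketch, in PyTorch, of the general pattern behind CNN-based super-resolution: naively upsample a multispectral patch, then let a small convolutional network learn to sharpen it. The band count, scale factor, and layer sizes are illustrative assumptions, not the state-of-the-art models the team actually adapted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinySRNet(nn.Module):
    """Minimal SRCNN-style network: upsample naively, then learn to sharpen.
    Band count (4, e.g. blue/green/red/NIR) and scale factor are illustrative."""

    def __init__(self, bands: int = 4, scale: int = 4):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, bands, kernel_size=5, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Bicubic upsampling first; the convolutions then restore detail.
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return self.refine(x)


model = TinySRNet()
low_res = torch.rand(1, 4, 64, 64)   # one 4-band 64x64 patch
high_res = model(low_res)            # shape: (1, 4, 256, 256)
print(high_res.shape)
```

In a real pipeline such a network would be trained with a pixel-wise loss against higher-resolution targets, for instance derived with the help of the panchromatic band as described above.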
What does this mean for our understanding of geospatial data as a whole?
These higher-resolution images can then be used to help analytics algorithms such as object detection or image segmentation improve their performance.
The reason for this is that almost all modern image analytics is based on computer vision and deep learning, which means the models do their work based on characteristics such as shape, texture, and intensity. A super-resolution algorithm makes the shape of an object sharper.
For us humans, it becomes easier to identify a boat or even differentiate what kind of boat it is. The same is true for these computer vision algorithms: they get a clearer picture of the object they are looking for. Other examples of applications besides ship detection are car or building detection. There are some papers that indicate it can also be useful for land cover segmentation.
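As a rough illustration of that point, the sketch below (assuming PyTorch and torchvision, with bicubic interpolation standing in for a learned super-resolution model) upsamples a low-resolution chip before handing it to an off-the-shelf detector; the chip size and scale factor are hypothetical.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Off-the-shelf detector; in a real geospatial setting this would be a model
# fine-tuned for ships, cars, or buildings.
detector = fasterrcnn_resnet50_fpn(pretrained=True).eval()

# Stand-in for a low-resolution RGB image chip with values in [0, 1].
chip = torch.rand(3, 128, 128)

# Bicubic upsampling stands in here for a learned super-resolution model,
# which would produce sharper object boundaries than plain interpolation.
upsampled = torch.nn.functional.interpolate(
    chip.unsqueeze(0), scale_factor=4, mode="bicubic", align_corners=False
).squeeze(0).clamp(0, 1)

with torch.no_grad():
    detections = detector([upsampled])  # list of dicts: "boxes", "labels", "scores"

print(detections[0]["boxes"].shape)
```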
What are the challenges you find yourself solving at UP42?
There are two major challenges. The first is to make the data and processing blocks on our platform run smoothly with each other. We integrate many different data sources and algorithms, and it is really hard to make sure that workflows are reliable and always lead to the desired results for the user.
When it comes to the development of algorithms, the second main challenge right now is that the best-performing models are supervised algorithms and therefore need large amounts of high-quality training data, which is often not available.
This is a challenge for the overall AI community and many smart people are working on ideas to improve the situation. The next years will be exciting times.
In the current COVID-19 crisis, what opportunities do you see to offer support?
I think the most important thing that everybody can do right now is on a personal level. The pandemic will not be over anytime soon, but we need to balance the need to “flatten the curve” with other necessities. As human beings, we need to interact with each other, and as a society, we need a working economy. The only way we can balance these needs is to be reasonable and take all the precautions we can, while at the same time going on with our normal lives.
UP42 also has a COVID-19 support initiative that pretty much nails what we can do: offering free access to data, algorithms, and infrastructure to people who have ideas on how to help with the current crisis.
What are you most looking forward to at UP42?
It will get really exciting once we have a good number of high-quality datasets on the platform and the ability to combine them. Analytics based on fusing data sources with different characteristics, or simply on combining their spatial and temporal coverage, will allow us to derive new and amazing insights.
What’s your favorite planetary perspective?
I am biased here. I am super proud of the work that my team did on super-resolution, so running a workflow of Pléiades, pansharpening, and super-resolution, and then marveling at the results, is my favorite thing to do.
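For readers curious what that chain looks like conceptually, here is a small standalone sketch: a simple Brovey-style pansharpening step followed by bicubic upsampling as a stand-in for the learned super-resolution model. It uses synthetic NumPy arrays rather than real Pléiades rasters, and it is not the UP42 platform workflow itself; the function name and array sizes are assumptions.

```python
import numpy as np
import torch
import torch.nn.functional as F


def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Very simple Brovey-style pansharpening: scale each multispectral band by
    the ratio of the panchromatic band to the per-pixel band mean.
    ms has shape (bands, H, W) and is assumed to be resampled to the pan grid."""
    intensity = ms.mean(axis=0) + 1e-6  # avoid division by zero
    return ms * (pan / intensity)


# Synthetic stand-ins for real rasters.
ms = np.random.rand(4, 256, 256).astype("float32")   # 4-band multispectral
pan = np.random.rand(256, 256).astype("float32")     # panchromatic band

sharpened = brovey_pansharpen(ms, pan)

# Bicubic upsampling stands in for the learned super-resolution model.
upscaled = F.interpolate(
    torch.from_numpy(sharpened).unsqueeze(0),
    scale_factor=4, mode="bicubic", align_corners=False,
)
print(upscaled.shape)  # torch.Size([1, 4, 1024, 1024])
```

On the platform itself, the same chain is assembled by combining the corresponding data and processing blocks into a workflow.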
 
Want to work with Markus? Check out UP42 careers here!