FoodTracks builds web-based controlling solutions for bakeries, focusing on early problem recognition in sales and on reducing food waste. I've helped them build a pipeline that monitors and loads data from a wild bouquet of client-hosted ERP systems, transfers it securely to FoodTracks' infrastructure, parses it, normalizes it into a common format, enriches it with external information and custom data models, and finally serves it through a REST interface. As a spin-off of this pipeline we also developed a novel product to track inventory anomalies.
Scrapinghub is where I learned the ropes of software development, of working autonomously (they're a fully remote 120-person company!), and of helping junior developers grow. They provide web scraping and data extraction services, with an open-source philosophy at their heart. Among many other things, I started a solution to collect the inventory of 20+ ticket vendors at single-seat resolution with multiple daily refreshes, and grew it to a team of four full-time developers; managed and led development of an invite-only SEO scraping product with a customer-facing API; and was lead maintainer of the command-line client for Scrapy Cloud, their flagship product.
Before my career in software engineering I studied the marvelous field of physics, and the Max Planck Institute for Dynamics and Self-Organization is where I finished it off with my Master's thesis. For it, I built and evaluated a novel experiment tracking 40,000 coalescing oil droplets on water, with surface areas ranging from 0.01 cm² to 1,000 cm². In the process I developed two open-source projects: one to obtain full-resolution images from consumer-level DSLRs at high frame rates not achievable with existing software, and another to simplify and modularize batch image processing, in my case using custom computer vision solutions to detect and track the oil droplets.