Industry stories about machine learning

Last week, I went to the PAPIs.io Europe 2018 conference, which was held in Canary Wharf Tower in London. The conference describes itself as a “series of international conferences dedicated to real-world Machine Learning applications, and the innovations, techniques and tools that power them” (and, from what I gather, the name PAPIs comes from “Predictive APIs”). I went down on the Thursday, the day that was dedicated to “Industry and Startups,” and took some notes on what I saw. Here’s a quick summary!

ML infrastructure with Kubernetes, Dask, and Jupyter

The morning keynote was by Olivier Grisel, who is probably best known for his immense contributions to scikit-learn — and therefore anyone who does machine learning in Python is indebted to him! His slides are online here.

In this video, starting around 23 minutes in, he shows how to set up your own machine learning infrastructure using three main open source components: Kubernetes (a cluster orchestration system based on containers), Dask (a tool to parallelize Python jobs that is deeply integrated with existing Python libraries like pandas/numpy), and Jupyter Notebooks (the well-known web application for interactive development).

Specifically, he was using Minikube to run Kubernetes locally and JupyterHub to manage multiple users on a single Jupyter server. The first example that he showed was somewhat trivial (e.g., incrementing a counter), but this allowed him to describe, in depth, how the computation was being distributed and executed. The second example showed how to run a grid search to find the best parameters for a Support Vector Machine, using joblib’s Dask backend to run it on the cluster.
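To make that second example concrete, here is a minimal sketch of what a scikit-learn grid search routed through a Dask cluster looks like. The dataset and parameter grid are my own placeholders, not the ones from the talk.

```python
from dask.distributed import Client
from joblib import parallel_backend
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Connect to a Dask scheduler (e.g. one running on a Minikube cluster);
# with no address given, this starts a local cluster instead.
client = Client()

X, y = load_digits(return_X_y=True)
param_grid = {"C": [0.1, 1, 10], "gamma": [0.001, 0.01, 0.1]}
search = GridSearchCV(SVC(), param_grid, cv=3)

# Route joblib's parallelism (which scikit-learn uses internally)
# through the Dask workers instead of local processes.
with parallel_backend("dask"):
    search.fit(X, y)

print(search.best_params_)
```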

One of my favourite lines from the Q&A was an off-hand comment that touches on developing ML systems: “you shouldn’t do everything in a Jupyter notebook” (because that’s not great for maintenance).

Prediction at the edge with AWS

The second talk was by Julien Simon (who blogs here), an AI/ML evangelist from Amazon. Starting at about minute 59 in this video, his talk focused on running machine learning predictions outside of data centers (‘at the edge’ — on cameras, on sensors, etc.). Perhaps unsurprisingly, this entailed a whirlwind tour of the various AWS services that are available for machine learning systems. These included:

  • Defining and manipulating models with Gluon and MXNet;
  • Building and training models with SageMaker;
  • Using Lambda to write on-demand prediction functions;
  • Deploying the code to edge devices using Greengrass.
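As a flavour of the first of these, here is a minimal Gluon sketch (my own toy example, not code from the talk): a small network is defined imperatively, hybridized into a static graph, and exported, so that the resulting artefacts could in principle be shipped to an edge device.

```python
import mxnet as mx
from mxnet import gluon, nd

# A small feed-forward network, defined with Gluon's imperative API.
net = gluon.nn.HybridSequential()
net.add(gluon.nn.Dense(64, activation="relu"))
net.add(gluon.nn.Dense(10))
net.initialize(mx.init.Xavier())

# Hybridizing compiles the network into a static graph, which is what
# makes it exportable.
net.hybridize()

x = nd.random.uniform(shape=(1, 784))  # a dummy input
scores = net(x).softmax()

# Serialise the symbol and parameters (my-model-symbol.json and
# my-model-0000.params), e.g. for deployment to a device.
net.export("my-model")
```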

His talk closed with a demo of DeepLens, the recently launched “world’s first deep learning enabled video camera for developers,” showing real-time object detection in action.

Managing the gap between the engineer and data scientist roles

One of the talks that touched on very interesting topics was by Beth Logan from dataxu, a data-driven advertising company. She described how they develop and automate the deployment of machine learning pipelines to support various applications in the advertising domain (hence the talk’s title, ‘changing tires while driving’; it is online here).

Moving away from the ML itself, there were some interesting points made about how to manage what a ‘data scientist’ does vs. what an ‘engineer’ does, so that each role plays to its strengths. In effect, this was about letting data scientists develop and iterate on models, while leaving the job of productionising and scaling them to engineers — who then had to demonstrate that the production implementation performed as expected.
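That last requirement, showing that the production implementation matches the research one, can be made concrete with a simple parity test. Here is a pytest-style sketch; the research_model and prod_model arguments are hypothetical stand-ins for the two implementations.

```python
import numpy as np

def test_production_matches_research(research_model, prod_model, holdout_X):
    """Check that the production port reproduces the research model's scores."""
    expected = research_model.predict_proba(holdout_X)
    actual = prod_model.predict_proba(holdout_X)
    # Allow for small numerical differences, but flag any real divergence.
    np.testing.assert_allclose(actual, expected, rtol=1e-5, atol=1e-8)
```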

The intersection of data science and engineering is a topic that I could probably write an entire blog post about; suffice to say, we had a discussion at the end about whether such a divide is the ‘right’ way to do this, and how each discipline can upskill the other while collaborating.

Pipeline jungles in machine learning

The next talk was by Moussa Taifi from AppNexus, another company that deals with digital advertising. He discussed building various kinds of pipelines for click prediction, a common task in online advertising.

Moussa touched on a number of practical aspects of developing pipelines while going back and forth between research and production. These included:

  • getting into trouble reproducing results once pipelines become overly complex (‘jungles’);
  • versioning models across experiments;
  • avoiding common issues like time travel, i.e. training on data that was created after the data in the test set (a simple guard is sketched after this list);
  • choosing between systems with just-in-time data transformations and feature extraction vs. building models from a fixed, precomputed set of features, regardless of the task at hand.
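Time travel, in particular, is cheap to guard against: split train and test sets on a timestamp rather than at random, so that nothing in the training set was created after the test period began. A minimal sketch, using a toy click log (the column names are hypothetical):

```python
import pandas as pd

# A toy click log; in practice this would be the pipeline's event table.
clicks = pd.DataFrame({
    "created_at": pd.to_datetime(["2017-12-30", "2017-12-31", "2018-01-02"]),
    "clicked": [0, 1, 0],
})

def time_based_split(events: pd.DataFrame, cutoff: str):
    """Split on event time so that the model never trains on the future."""
    cutoff_ts = pd.Timestamp(cutoff)
    train = events[events["created_at"] < cutoff_ts]
    test = events[events["created_at"] >= cutoff_ts]
    return train, test

train_df, test_df = time_based_split(clicks, cutoff="2018-01-01")
```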

Building a culture of machine learning

Lars Trieloff gave a high-level talk about nurturing a culture of AI inside of Adobe — focusing specifically on Adobe Sensei. His talk spanned three broad areas: brand, vision, and technology, and how the three needed to gel in order to foster a culture of machine learning within an organisation. Interestingly, he also touched on responsibility — and how all employees at the company needed to go through a training and approval process when developing new machine learning tools.

Feasibility vs return on investment in machine learning

Poul Petersen from BigML gave a talk about how the company predicted 6 out of 6 of the 2018 Oscar winners — see this blog post, which has some similar content. Oscars aside, he made an interesting observation about how to prioritise machine learning projects, based on comparing their feasibility and projected return on investment. If both are low, this is clearly a no-go area; if both are high, this is a no-brainer that you should already be working on. The remaining two categories were ‘postponable’ (low ROI, highly feasible) and ‘brainers’ (high ROI, not currently feasible).

He gave a similar analogy for how his go-to algorithms progress, depending on what stage of development a particular system is at: early stage, requiring rapid prototyping (logistic regression); mid stage, where you have a proven application (random forests); and late stage, where tweaking performance becomes critical (neural networks).
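One reason this progression is cheap in practice: if the preprocessing lives in a pipeline, swapping the estimator is a one-line change. Here is a sketch using scikit-learn (my own illustration, not from the talk; the stage names are hypothetical):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def build_pipeline(stage: str) -> Pipeline:
    """Same preprocessing throughout; only the estimator changes by stage."""
    estimators = {
        "early": LogisticRegression(),    # rapid prototyping
        "mid": RandomForestClassifier(),  # a proven application
        "late": MLPClassifier(),          # tweaking performance
    }
    return Pipeline([
        ("scale", StandardScaler()),
        ("model", estimators[stage]),
    ])

pipe = build_pipeline("early")  # later: build_pipeline("mid"), then "late"
```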

Startup pitches & panel — the European AI landscape

The startup pitches were dispersed across the day. The ones I saw were:

  • Logical Clocks: an enterprise machine learning platform called Hops that aims to improve the productivity of data scientists.
  • Antiverse: aims to enable antibody drug discovery in one day using AI.
  • Tensorflight: automates property inspections by analysing satellite and aerial data using machine learning.
  • Teebly: offers a single point of contact for a business’s clients, consolidating the various ways that they can get in touch.

Some of these startups participated in a startup battle at the end of the day, which was judged by an AI. While I was somewhat sceptical when I first heard about this, it was actually very entertaining. Each startup took turns being asked questions by an Alexa, covering things like the size, experience, and structure of the team, and was scored on a variety of factors. The winners took home £100k!

The startup panel, instead, was retrospective — looking back at what worked for Twizoo (which was acquired by Skyscanner, shortly before I left), prediction.io (which was founded in London and acquired by Salesforce), and Seedcamp. The recurring theme was the importance of focusing on the customer rather than on the machine learning: technology is an enabler to solve a customer’s pain, and the abstract machine learning problems that need to be solved along the way are nearly superfluous compared to the customer’s need.

There were many different take-aways from the day. One that stands out is that the European startup landscape in the machine learning space is still thriving and growing. And, indeed, Libby from Project Juno AI announced that they are starting another round of mapping this landscape — a project that’s definitely worth checking out and contributing to.