Machine Learning on iOS with CoreML and React-Native

If you're like me, and you've always dreamed of developing an application capable of differentiating a hotdog from your neighbor's dachshund using your iPhone, this tutorial is for you. To do this, we will use React-Native and machine learning. Let's go!
Machine Learning solutions have long been available in the cloud via APIs, but they required an internet connection and could be time consuming and expensive to process.
With the release of iOS 11, Apple has made its Machine Learning library, CoreML, available to developers. Now, it is no longer necessary to have a powerful server to process requests or to call on a third-party API. CoreML takes care of everything with the computing power of your smartphone.
Are you curious how to integrate it into your applications? It's easier than you might think.
Let's take as an example the famous parody application Not Hotdog, seen in the series Silicon Valley, which, when the user points their smartphone at an object, instantly tells them whether that object is a hotdog or not.

What you will learn in this part

  • Install your environment
  • Collect data
  • Create your image classification model with Turi Create
  • Recognize a hotdog using your iPhone's camera

Prerequisites

  • macOS 10.12+
  • or Linux (with glibc 2.12+)
  • or Windows 10 (via WSL)

  • Python 2.7, 3.5 or 3.6

  • an x86_64 architecture

  • Xcode 9

  • iOS 11 +

What is Turi Create?

It is a tool that "simplifies the development of custom machine learning models," which can then be used in applications through the CoreML library.
Turi Create is built on top of Apache MXNet, a deep learning framework.
It provides developers with flexible technology to classify images, detect objects, build recommendation systems, and more.
The tool is extremely easy to use, flexible, and fast.

Installation

As with many Machine Learning tools, Python is the language used. But don't worry, the code is very simple to understand.
It is recommended to work in a virtual environment. Install virtualenv by entering the command:

pip install virtualenv

If you don't have pip, the Python package manager, you can get it by installing Python with Homebrew (pip ships with it):

brew install python

Then,

# Create a Python virtual environment
cd ~
virtualenv venv
# Activate your virtual environment
source ~/venv/bin/activate
# Install Turi Create
pip install -U turicreate

Collect data

Before we can train a model, we need data.
This data can be found on the ImageNet website, a very large image database with over 14 million entries.
For our project, we need two categories of images: Hotdog and… Not Hotdog.
Here is the link for the first category: http://www.image-net.org/synset?wnid=n07697537.

You can also retrieve all the links using the public API, via the Download button in the Downloads tab:

http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=n07697537
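If you'd rather not depend on the Node scripts described below, note that the API simply returns a plain-text response with one image URL per line. Here is a minimal Python sketch for turning such a response into a clean list of links (the `parse_url_list` helper is hypothetical, for illustration only):

```python
def parse_url_list(raw_text):
    """Split a raw ImageNet API response into a list of URLs.

    The API returns one URL per line; blank lines and stray
    whitespace are stripped out.
    """
    urls = []
    for line in raw_text.splitlines():
        line = line.strip()
        if line.startswith("http"):
            urls.append(line)
    return urls


# Example with a fake two-line response:
sample = "http://example.com/hotdog1.jpg\n\nhttp://example.com/hotdog2.jpg\n"
print(parse_url_list(sample))
# → ['http://example.com/hotdog1.jpg', 'http://example.com/hotdog2.jpg']
```

You could then download each URL with any HTTP client, which is essentially what the scripts below do for you.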

Using turicreate-easy-scripts

You can find in the turicreate-easy-scripts repository a set of scripts that will make your life easier.
After cloning the repo, install the dependencies of the download-images folder:

cd download-images
npm install

You can run the command:

node imagenet-downloader.js http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=n07697537 hotdog

And get a folder full of pictures of hotdogs.
All that remains is to do the same for Not Hotdog. Choose whichever categories you like and label them as not-hotdog:

node imagenet-downloader.js http://image-net.org/api/text/imagenet.synset.geturls?wnid=n00021939 not-hotdog

Train the model

Once the images have been downloaded and categorized, all that remains is to train the classification model.
To do this, you can use the Python script provided in the repository cloned earlier:

cd train-model
python train-model.py


After about ten minutes (or more, depending on the number of images to process), you will obtain a Classifier.mlmodel file that we can now use.
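For reference, the heart of such a training script is only a few lines of Turi Create. Here is a hedged sketch of what it typically looks like (the folder layout, the labeling rule, and the 80/20 split are assumptions for illustration, not necessarily what the repository's train-model.py does):

```python
import turicreate as tc

# Load all downloaded images; with_path=True keeps the file path,
# which encodes the category (hotdog / not-hotdog) in the folder name.
data = tc.image_analysis.load_images('images/', with_path=True)

# Derive the label from each image's folder name.
data['label'] = data['path'].apply(
    lambda p: 'not-hotdog' if 'not-hotdog' in p else 'hotdog')

# Hold out 20% of the images to evaluate the classifier.
train, test = data.random_split(0.8)

# Train the image classifier (transfer learning on a pretrained network).
model = tc.image_classifier.create(train, target='label')

# Check accuracy on the held-out set, then export for Core ML.
print(model.evaluate(test)['accuracy'])
model.export_coreml('Classifier.mlmodel')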

Create a React-Native project

First, you will need to create a new React-Native project.
Open your terminal, navigate to your projects folder and enter the following command:

react-native init notHotDog (or any other name)

After a few minutes, everything will be installed and you'll be ready to move on.

Install the CoreML library

We will use the react-native-core-ml-image library:

npm install --save react-native-core-ml-image
react-native link

Go to your project's “ios” folder and double-click the notHotDog.xcodeproj file to open it in Xcode.

Configure the project

By default, React-Native projects are configured primarily for Objective-C. Because the react-native-core-ml-image library is written in Swift, a few project settings need to change.
First of all, add a Swift file to the project.


The name does not matter; the file will not be used anyway. A message then appears suggesting that you create an “Objective-C Bridging Header”: this is the file that links Swift with the header files of the Objective-C classes.

Finally, since the library is written in Swift 4.0, you will have to specify the version of Swift to use (3.2 being the default).
Click on the root of the project (notHotDog), select the “Build Settings” tab, then, at the very bottom, change the Swift language version.

Import the CoreML model into the project

Before moving on to the programming part, all that remains is to import our image classification model into the notHotDog project.
Drag and drop the Classifier.mlmodel model into the project and rename it notHotDog.mlmodelc (no, that's not a typo).

CoreML doesn't work directly with .mlmodel files; they must first be compiled to .mlmodelc (the “c” stands for compiled), but our Python script has already taken care of that (see the last line of train_model.py):

# Export for use in Core ML
model.export_coreml('Classifier.mlmodel')

Allow access to the camera

In the Info.plist file, click the little plus to the right of any entry and add “Privacy – Camera Usage Description”.

That's it for the setup! It remains only to implement all this.

Implement the code

The first thing to do is to import the react-native-core-ml-image library into the project. For this example, all the code will live in App.js:

import CoreMLImage from 'react-native-core-ml-image';

Next, replace the entire render() method with the following:

render() {
    let classification = null;
    if (this.state.bestMatch) {
      if (this.state.bestMatch.identifier && this.state.bestMatch.identifier === "hotdog") {
        classification = "Hotdog";
      } else {
        classification = "Not hotdog";
      }
    }
    return (
      <View style={styles.container}>
          <CoreMLImage modelFile="notHotDog" onClassification={(evt) => this.onClassification(evt)}>
              <View style={styles.container}>
                <Text style={styles.info}>{classification}</Text>
              </View>
          </CoreMLImage>
      </View>
    );
  }

The onClassification callback lets us receive updates whenever a new object has been classified. It returns data of the following shape:

[
  {
    identifier: "hotdog",
    confidence: 0.87
  },
  {
    identifier: "not-hotdog",
    confidence: 0.4
  }
]

We just have to implement the onClassification method, which is responsible for finding the best classification:

const BEST_MATCH_THRESHOLD = 0.5;
onClassification(classifications) {
    let bestMatch = null;
    if (classifications && classifications.length) {
classifications.forEach(classification => {
        if (!bestMatch || classification.confidence > bestMatch.confidence) {
          bestMatch = classification;
        }
      });
      if (bestMatch.confidence >= BEST_MATCH_THRESHOLD) {
        this.setState({
          bestMatch: bestMatch
        });
      }
      else {
        this.setState({
          bestMatch: null
        });
      }
    }
    else {
      this.setState({
        bestMatch: null
      });
    }
  }

Given the data above, bestMatch will be:

{
  identifier: "hotdog",
  confidence: 0.87
}

Here is the full code:

import React, { Component } from "react";
import { Platform, StyleSheet, Text, View } from "react-native";
import idx from "idx";
import CoreMLImage from "react-native-core-ml-image";

const BEST_MATCH_THRESHOLD = 0.5;
export default class App extends Component<{}> {
  constructor() {
    super();
    this.state = {
      bestMatch: null
    };
  }
  onClassification(classifications) {
    let bestMatch = null;
    if (classifications && classifications.length) {
classifications.forEach(classification => {
        if (!bestMatch || classification.confidence > bestMatch.confidence) {
          bestMatch = classification;
        }
      });
      if (bestMatch.confidence >= BEST_MATCH_THRESHOLD) {
        this.setState({
          bestMatch: bestMatch
        });
      } else {
        this.setState({
          bestMatch: null
        });
      }
    } else {
      this.setState({
        bestMatch: null
      });
    }
  }
  classify() {
    if (idx(this.state, _ => _.bestMatch.identifier)) {
      if (this.state.bestMatch.identifier === "hotdog") {
        return "Hotdog";
      } else {
        return "Not hotdog";
      }
    }
  }
  render() {
    return (
      <View style={styles.container}>
        <CoreMLImage
          modelFile="notHotDog"
          onClassification={evt => this.onClassification(evt)}
        >
          <View style={styles.container}>
<Text style={styles.info}>{this.classify()}</Text>
          </View>
        </CoreMLImage>
      </View>
    );
  }
}
const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
    backgroundColor: "transparent"
  },
  info: {
    fontSize: 20,
    color: "#ffffff",
    textAlign: "center",
    fontWeight: "900",
    margin: 10
  }
});

All you have to do is run the code on your iPhone (the camera will not work in the simulator).
If you've done everything right, the app will ask for permission to access your camera, and you'll be able to tell a hotdog from your neighbor's dachshund.
Thanks for reading! If you liked the article, don't hesitate to share it on social networks!
Article written by Jeremiah Zarca.