Misty API Explorer

Easy Commands

Asset Management

To add an image or audio asset to the robot, drag and drop the file onto the appropriate drop box, or click the box to browse for the file. Once the file has been successfully added, click the Save to Robot button. The maximum file size is 3 MB. Accepted audio file extensions are: .wav, .mp3, .wma, .aac. Accepted image file extensions are: .jpeg, .jpg, .png, .gif.

To delete a file from the robot, first populate the list of files by clicking the Populate Audio/Image List button. Select the file from the dropdown list. Then click the Delete Clip/Image button.
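The Save to Robot button sends the file to the robot over its REST API. The sketch below shows how such an upload payload might be validated and built, enforcing the 3 MB limit and the accepted extensions listed above. The endpoint field names (FileName, Data) are assumptions modeled on Misty's REST conventions, not confirmed API details; check the current API reference before using them.

```python
import base64
import json

MAX_FILE_BYTES = 3 * 1024 * 1024  # the 3 MB limit noted above
AUDIO_EXTENSIONS = {".wav", ".mp3", ".wma", ".aac"}

def build_save_audio_payload(file_name, file_bytes):
    """Validate a file against the limits above and build a JSON body.

    The field names (FileName, Data) are assumptions based on Misty's
    REST conventions; confirm them against the current API reference.
    """
    ext = file_name[file_name.rfind("."):].lower()
    if ext not in AUDIO_EXTENSIONS:
        raise ValueError(f"unsupported audio extension: {ext}")
    if len(file_bytes) > MAX_FILE_BYTES:
        raise ValueError("file exceeds the 3 MB limit")
    return json.dumps({
        "FileName": file_name,
        # binary file data is base64-encoded for the JSON body
        "Data": base64.b64encode(file_bytes).decode("ascii"),
    })

payload = build_save_audio_payload("hello.wav", b"\x00\x01\x02")
```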


Manual Driving

Sensor Data

Use the switches below to subscribe via websockets to data from the robot's sensors. The sensors' values will stream to the text box next to the switches.

Time of Flight
Bump Sensors
Actuator Position
Battery Voltage
Cap Touch Sensor
Drive Encoders


Websockets provide up-to-date information on specified objects. You can currently subscribe to the objects shown below. The Sensor Reading websockets are an example, showing Time of Flight and Battery Charge messages whose values stream to the text box next to them.

To see the websocket responses, you must view them through the browser console.

The generic subscribe example lets you pick from our growing list of subscription options. If you only select a named object, the Event Name for that object is set to the named object's name, and that is the name you must use to unsubscribe. If you do not enter a debounce, it defaults to 250 ms; messages are sent at most once per debounce interval. Too many socket subscriptions at a fast debounce can cause performance issues, so remember to unsubscribe when you don't need data and to set the debounce as high as is appropriate for your needs.
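The subscribe and unsubscribe cycle described above can be sketched as websocket message builders. The field names here (Operation, Type, EventName, DebounceMs) are assumptions based on Misty's websocket conventions; since websocket responses are visible in the browser console, you can verify the exact shape against the traffic API Explorer actually sends.

```python
import json

def subscribe(named_object, event_name=None, debounce_ms=250):
    """Build the JSON text sent over the websocket to start a stream.

    Field names are assumptions; verify them in the browser console.
    If no event name is given, the named object's name doubles as the
    event name, which is the name you must later unsubscribe with.
    """
    return json.dumps({
        "Operation": "subscribe",
        "Type": named_object,
        "EventName": event_name or named_object,
        "DebounceMs": debounce_ms,  # defaults to 250 ms, as noted above
    })

def unsubscribe(event_name):
    """Build the matching unsubscribe message for a prior subscription."""
    return json.dumps({
        "Operation": "unsubscribe",
        "EventName": event_name,
    })

msg = json.loads(subscribe("TimeOfFlight"))
```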

To filter a subscription to specific details, put the data property path in the ReturnProperty field, and only that data will be returned. The data property path is specified relative to the Named Object and currently must be discovered by examining the data packet. For example, if you want MentalState (an object within SelfState that you can't subscribe to directly), put MentalState in the ReturnProperty field. If you want the specific Valence value of the Affect in MentalState, your ReturnProperty is MentalState.Affect.Valence.

You can use the same pattern to filter on data. In this case, data is only sent when the filter condition is true. For example, to return the MentalState Affect data above only when the Dominance value in Affect equals 1, use the following settings.

NamedObject: SelfState
Property: MentalState.Affect.Dominance
Comparison: ==
Value: 1
ReturnProperty: MentalState.Affect

The currently allowed comparison options are: ==, !=, <, <=, >, >=, empty, and exists.
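The filter settings listed above can be expressed as a single subscribe message. How the form fields map onto message fields (ReturnProperty, EventConditions, and the Property/Comparison/Value triple) is an assumption here; inspect the browser console to confirm the exact shape API Explorer sends for a filtered subscription.

```python
import json

def filtered_subscribe(named_object, prop, comparison, value, return_property):
    """Sketch of a filtered subscription; field names are assumptions."""
    return json.dumps({
        "Operation": "subscribe",
        "Type": named_object,
        "EventName": named_object,
        "DebounceMs": 250,
        "ReturnProperty": return_property,  # only this data is returned
        "EventConditions": [{
            "Property": prop,
            "Comparison": comparison,  # ==, !=, <, <=, >, >=, empty, exists
            "Value": value,
        }],
    })

# The example settings from the text: send MentalState.Affect only
# when its Dominance value equals 1.
msg = json.loads(filtered_subscribe(
    "SelfState", "MentalState.Affect.Dominance", "==", 1, "MentalState.Affect"))
```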

Websockets are needed for most of the following commands. API Explorer automatically opens a websocket when you enter an IP address and connect to the robot (at the top of the page).

Other Websockets

Beta Commands

Computer Vision

Enter a name in the text field and click the Start Face Training button. Position your face in front of the camera, about a foot or two away, for about 30 seconds. The camera takes a series of pictures of your face and attempts to create a face matrix so Misty can recognize you in the future. Misty lets you know she has the face information she needs by presenting a message that the face training embedding phase is complete. Only select Cancel Face Training if you want to cancel training while Misty is still trying to detect a face.
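Behind the Start Face Training button is a REST call carrying the name you entered. The sketch below is a guess at its shape: the endpoint path and the FaceId field name are assumptions, and the IP address is a placeholder; check the Misty API reference for the exact route your robot's software version uses.

```python
import json

def face_training_request(robot_ip, face_name):
    """Build the (assumed) URL and body for starting face training.

    Both the path and the FaceId field name are assumptions, not
    confirmed API details.
    """
    url = f"http://{robot_ip}/api/faces/training/start"
    body = json.dumps({"FaceId": face_name})
    return url, body

url, body = face_training_request("10.0.0.42", "alice")
```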

Currently, the face detection and recognition data comes directly from the sensors so some of the data may be incomplete. We are currently working on improvements to aggregate this data.

Head Commands

Head movement still varies considerably from robot to robot at this time. If your robot's head has not been calibrated or is front-heavy, it may not move.

  • Back (Up)
  • Forward (Down)


    Mapping and Exploring

    Following is a "quick start" version of the instructions for mapping and exploring. For a more detailed explanation of these functions, see the documentation at https://docs.mistyrobotics.com.

    Select Start Mapping. After a few seconds you should obtain pose. Once pose is obtained, the pose circle will turn green and the pose updates will start streaming. If you never get pose, select Stop Mapping and then Reset Depth Sensor. Check the status with Get Depth Sensor Status, and if the status is valid, try again. If you do not see pose updates, it is also possible the lighting is too low for the robot. If a status of 'Ready' is not returned after multiple Reset Depth Sensor and Get Depth Sensor Status calls, the depth sensor may be in a bad state that requires a robot reboot.
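The recovery sequence above (stop mapping, reset the depth sensor, re-check its status, and only reboot if 'Ready' never comes back) can be sketched as a small retry loop. The endpoint paths here are assumptions modeled on Misty's REST conventions, and `post`/`get_status` are stand-ins for real HTTP calls.

```python
def recover_depth_sensor(post, get_status, max_resets=3):
    """Retry loop for a depth sensor that never obtains pose.

    post(path) issues a POST to the robot; get_status() returns the
    depth sensor status string. Endpoint paths are assumptions.
    """
    post("/api/slam/map/stop")          # Stop Mapping
    for _ in range(max_resets):
        post("/api/slam/reset")         # Reset Depth Sensor
        if get_status() == "Ready":     # Get Depth Sensor Status
            return True                 # valid: try Start Mapping again
    return False  # still not Ready: the sensor may need a robot reboot

# Exercise the loop with a fake robot that reports Ready immediately.
calls = []
ok = recover_depth_sensor(calls.append, lambda: "Ready")
```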

    Select one of the drive options (Turn in Circle, etc), or drive the robot around for a fuller map. When you are done with mapping, select Stop Mapping and then Get Map to retrieve the map from the robot. When creating a map or tracking, be sure to adjust the velocity so the robot moves slowly and keeps its pose.

    To drive to a location, first determine the X, Y coordinates (X is up/forward, Y is across), then select Start Tracking. You should see pose updates start again. You can now enter the waypoints you want the robot to drive to and select Follow Path (waypoints should be entered in the form X1:Y1,X2:Y2,X3:Y3).
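The waypoint string format above (X1:Y1,X2:Y2,X3:Y3) is easy to build from coordinate pairs. This helper is a convenience sketch, not part of the Explorer itself.

```python
def format_waypoints(points):
    """Turn [(x, y), ...] map coordinates into the Follow Path string,
    e.g. [(10, 5), (12, 8)] -> "10:5,12:8"."""
    return ",".join(f"{x}:{y}" for x, y in points)

path = format_waypoints([(10, 5), (12, 8), (15, 8)])
```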

    When you are done driving, you can select Stop Tracking in order to release resources on the robot.



    Read coordinates from the bottom right corner.

    When determining waypoints, X is the direction the robot is looking at the start of mapping and is read from the bottom of the map to the top of the map.

    Y is read from right to left with zero being the right side of the map.

    Pixels per grid is a value from 1 to 20 that indicates the number of pixels per grid cell on the rendered map. The higher the number, the larger the map.
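As a quick arithmetic check on the setting above: the rendered map's size in pixels is the map's grid-cell count multiplied by the pixels-per-grid value. The 300-cell map width below is just an example value.

```python
def rendered_size(grid_cells, pixels_per_grid):
    """Pixel size of one map dimension for a given pixels-per-grid setting."""
    if not 1 <= pixels_per_grid <= 20:
        raise ValueError("Pixels per grid must be between 1 and 20")
    return grid_cells * pixels_per_grid

# A hypothetical 300-cell-wide map rendered at 10 px per grid cell.
width_px = rendered_size(300, 10)
```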



    4K Camera - Photo

    4K Camera - Video

    Ultra-Wide Vision Camera

    These controls allow you to take a black-and-white photo with the ultra-wide vision camera on Misty’s Occipital Structure Core depth sensor. When you take a photo, it is automatically either displayed onscreen in this window or downloaded to your computer. Note that displayed photos are not saved and cannot be downloaded at a later time.

    Before taking a photo, click Start Camera. When done, click Stop Camera.
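The Start Camera, take photo, Stop Camera sequence above maps onto three REST calls. All three endpoint paths below are assumptions modeled on Misty's REST conventions (and the IP is a placeholder); confirm them against the current API reference.

```python
def wide_photo_sequence(robot_ip):
    """The (assumed) request sequence behind the ultra-wide photo controls."""
    base = f"http://{robot_ip}/api"
    return [
        ("POST", f"{base}/cameras/start"),    # Start Camera (assumed path)
        ("GET",  f"{base}/cameras/fisheye"),  # take the photo (assumed path)
        ("POST", f"{base}/cameras/stop"),     # Stop Camera (assumed path)
    ]

steps = wide_photo_sequence("10.0.0.42")
```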

    System Updates

    For more information, please refer to the documentation at https://docs.mistyrobotics.com.

    You can use this to update a specific component that fails to update with the rest of the system. Check the box next to each component you want to attempt to update, then click Perform Targeted Updates to start the update process.

    Override Update & Battery Checks
    Important: Only use this option if asked to do so by a member of the Misty Robotics staff!

    Connect Wifi

    If you are having trouble connecting with the app, you can connect the robot to your network with an Ethernet cable through the backpack, connect to the Ethernet IP at the top of the page, and then set the wifi network here. It can take a minute or two before the robot is fully connected.
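The Connect Wifi form ultimately posts the network credentials to the robot over the Ethernet connection. The sketch below is a guess at that request: the endpoint path and the NetworkName/Password field names are assumptions, and the IP is a placeholder; verify them in the Misty API reference.

```python
import json

def connect_wifi_request(robot_ip, ssid, password):
    """Build the (assumed) URL and body for setting the wifi network."""
    url = f"http://{robot_ip}/api/networks"
    body = json.dumps({"NetworkName": ssid, "Password": password})
    return url, body

url, body = connect_wifi_request("10.0.0.42", "HomeWifi", "hunter2")
```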