
Spatial Continuity

Overview

This repository contains an implementation of Spatial Continuity. Spatial Continuity is a novel concept that combines spatial computing and large language models (LLMs) to provide seamless transitions between virtual and real-world contexts, making the world more accessible to users with low vision. Ultimately, this should lead to a truly continuous experience in which the friction that assistive technologies introduce, e.g., through setup and positioning, is fully removed.

This implementation is part of our work on the "Spatial Continuity: Investigating Use Cases of Spatial Computing for Users with Low Vision" poster, which was accepted at the 27th International ACM SIGACCESS Conference on Computers and Accessibility.

Screenshot: An Apple Vision Pro view of a poster of the Utah Teapot and a copy of the book Gödel, Escher, Bach, placed next to each other on a flat surface. The person wearing the Vision Pro (and taking the screenshot) is holding an iPhone showing a full-screen viewfinder. The iPhone's camera feed is livestreamed to the Spatial Continuity app on the Vision Pro, which displays two windows in front of the user, above the poster and the book: one showing the livestream from the iPhone with two menu items below, "Onboarding" and "Describe"; the other showing a static pin (screenshot) of the Gödel, Escher, Bach cover taken from the livestream.

Functionality

The current prototype implementation of Spatial Continuity consists of two apps: the Spatial Continuity app on Vision Pro and the Spatial Continuity Camera app on iPhone.

The Spatial Continuity Camera app functions as a hand-held magnifying glass. After connecting, it streams the iPhone's camera feed to the Spatial Continuity app running on Vision Pro.
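
The sketch below illustrates one possible way the iPhone side could capture and stream frames, using AVFoundation and MultipeerConnectivity. The class name `FrameStreamer` and the `"spatial-cam"` service type are illustrative assumptions; the actual transport used by the Spatial Continuity Camera app may differ.

```swift
import AVFoundation
import CoreImage
import MultipeerConnectivity
import UIKit

/// Minimal sketch of an iPhone-side frame streamer (illustrative, not the shipped implementation).
final class FrameStreamer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let peerID = MCPeerID(displayName: UIDevice.current.name)
    private lazy var session = MCSession(peer: peerID, securityIdentity: nil, encryptionPreference: .required)
    private lazy var advertiser = MCNearbyServiceAdvertiser(peer: peerID, discoveryInfo: nil, serviceType: "spatial-cam")
    private let captureSession = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let context = CIContext()

    func start() throws {
        // Advertise this iPhone so the Vision Pro app can discover and connect to it.
        advertiser.delegate = self
        advertiser.startAdvertisingPeer()

        // Configure the rear camera and deliver frames to this class.
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
            return
        }
        captureSession.beginConfiguration()
        captureSession.addInput(try AVCaptureDeviceInput(device: camera))
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        captureSession.addOutput(videoOutput)
        captureSession.commitConfiguration()
        captureSession.startRunning()
    }

    // Compress each frame to JPEG and send it to all connected peers (e.g. the Vision Pro).
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard !session.connectedPeers.isEmpty,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }
        let image = CIImage(cvPixelBuffer: pixelBuffer)
        guard let jpeg = context.jpegRepresentation(of: image, colorSpace: CGColorSpaceCreateDeviceRGB()) else {
            return
        }
        try? session.send(jpeg, toPeers: session.connectedPeers, with: .unreliable)
    }
}

extension FrameStreamer: MCNearbyServiceAdvertiserDelegate {
    // Accept incoming invitations from the Vision Pro app automatically.
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser, didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?, invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session)
    }
}
```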

The Spatial Continuity app can open screenshots from the camera livestream in separate windows for further inspection. If an OpenAI API key is provided, spoken image descriptions are generated as well.
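
As a rough illustration of the description feature, the sketch below sends a captured frame to the OpenAI chat completions endpoint and reads the returned description aloud with AVSpeechSynthesizer. The app itself integrates OpenAI through SpeziLLM (see Setup and Usage below); the direct URLSession call, model name, and prompt text here are assumptions made to keep the example self-contained.

```swift
import AVFoundation
import Foundation

/// Sketch of generating and speaking an image description for a pinned screenshot.
/// Illustrative only; the shipped app uses SpeziLLM for its OpenAI integration.
final class ImageDescriber {
    private let apiKey: String
    private let synthesizer = AVSpeechSynthesizer()

    init(apiKey: String) {
        self.apiKey = apiKey
    }

    func describeAndSpeak(jpegData: Data) async throws {
        // Build a multimodal chat request containing the screenshot as a base64 data URL.
        let body: [String: Any] = [
            "model": "gpt-4o",
            "messages": [[
                "role": "user",
                "content": [
                    ["type": "text", "text": "Describe this image for a user with low vision."],
                    ["type": "image_url",
                     "image_url": ["url": "data:image/jpeg;base64,\(jpegData.base64EncodedString())"]]
                ]
            ]]
        ]

        var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
        request.httpMethod = "POST"
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONSerialization.data(withJSONObject: body)

        let (data, _) = try await URLSession.shared.data(for: request)

        // Pull the description text out of the first choice of the response.
        guard let json = try JSONSerialization.jsonObject(with: data) as? [String: Any],
              let choices = json["choices"] as? [[String: Any]],
              let message = choices.first?["message"] as? [String: Any],
              let description = message["content"] as? String else {
            return
        }

        // Read the generated description aloud.
        synthesizer.speak(AVSpeechUtterance(string: description))
    }
}
```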

Setup and Usage

  1. Upon opening the Spatial Continuity app for the first time, you will be presented with the SpeziLLM onboarding window where you may enter your OpenAI API key. An OpenAI API key is required for generating spoken image descriptions.
  2. Ensure that both your iPhone and Vision Pro are on the same network. When you open the Spatial Continuity Camera app on the iPhone, the connection should be established automatically (see the discovery sketch below).
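
For illustration, the following is a minimal counterpart to the iPhone-side sketch above: the Vision Pro side browsing for the camera app and receiving frames over MultipeerConnectivity. The `"spatial-cam"` service type and the class name `CameraBrowser` are hypothetical; the app's actual discovery mechanism is not specified here.

```swift
import MultipeerConnectivity

/// Minimal sketch of the Vision Pro side discovering the iPhone automatically (illustrative only).
final class CameraBrowser: NSObject, MCNearbyServiceBrowserDelegate, MCSessionDelegate {
    private let peerID = MCPeerID(displayName: "Vision Pro")
    private(set) lazy var session = MCSession(peer: peerID, securityIdentity: nil, encryptionPreference: .required)
    private lazy var browser = MCNearbyServiceBrowser(peer: peerID, serviceType: "spatial-cam")

    func start() {
        browser.delegate = self
        session.delegate = self
        browser.startBrowsingForPeers()
    }

    // Invite any discovered iPhone running the Spatial Continuity Camera app.
    func browser(_ browser: MCNearbyServiceBrowser, foundPeer peerID: MCPeerID, withDiscoveryInfo info: [String: String]?) {
        browser.invitePeer(peerID, to: session, withContext: nil, timeout: 10)
    }

    func browser(_ browser: MCNearbyServiceBrowser, lostPeer peerID: MCPeerID) {}

    // Each received payload is one JPEG-compressed camera frame from the iPhone.
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
        // Decode the JPEG data and hand the frame to the livestream window here.
    }

    // Remaining MCSessionDelegate requirements (unused in this sketch).
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}
```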

Contributors

You can find a list of contributors in the CONTRIBUTORS.md file.

Open Source Contributions

As part of this work, we made the following open source contributions:

Contributing

Contributions to this project are welcome. Please make sure to read the contribution guidelines and the contributor covenant code of conduct first.

License

This project is licensed under the MIT License. See Licenses for more information.

Our Research

For more information, check out our website at biodesigndigitalhealth.stanford.edu.

This project is the result of a collaboration between the Stanford Mussallem Center for Biodesign and the Technical University of Munich.

