LabelLens – AI Product Label Analysis

LabelLens is a web app that lets you quickly scan a product label with your smartphone or device camera and get a clear, easy-to-understand breakdown of what’s in your food and other products. It demystifies confusing ingredient terms, helping you make more informed choices about the products you buy.

Key Features:

  1. Instant Ingredient Analysis: Simply snap a photo of any food label, and LabelLens will quickly identify and explain all ingredients, highlighting any potential concerns.
  2. Wellness Insights: Get a recommended health rating and analysis for each product, helping you make smarter choices aligned with your dietary and wellness goals.
  3. Multilingual Support: LabelLens can recognize and translate food labels in multiple languages, making it an invaluable tool for travelers or when shopping for international products.
  4. Customizable Alerts: Set up personalized ingredient watchlists to easily avoid allergens or other ingredients you’re trying to cut back on. You can also scan for healthy ingredients you want to focus on.
  5. Save and Compare: Build a database of your scanned products, allowing you to easily track and compare different options over time.
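To illustrate the watchlist idea in feature 4, here is a minimal sketch of how scanned ingredients might be matched against a user’s personal lists. This is a hypothetical example, not the actual LabelLens code; the `Watchlist` type and `checkIngredients` function are assumptions for illustration.

```typescript
// Hypothetical sketch of ingredient watchlist matching (not the actual LabelLens code).
type Watchlist = {
  avoid: string[]; // ingredients to flag (e.g. allergens)
  seek: string[];  // healthy ingredients to highlight
};

function checkIngredients(scanned: string[], lists: Watchlist) {
  const norm = (s: string) => s.trim().toLowerCase();
  const scannedSet = new Set(scanned.map(norm));
  return {
    flagged: lists.avoid.filter(i => scannedSet.has(norm(i))),
    highlighted: lists.seek.filter(i => scannedSet.has(norm(i))),
  };
}

// Example: a label containing peanuts and oat fibre
const result = checkIngredients(
  ["Sugar", "Peanuts", "Oat Fibre"],
  { avoid: ["peanuts", "aspartame"], seek: ["oat fibre"] }
);
// result.flagged → ["peanuts"], result.highlighted → ["oat fibre"]
```

Normalising case and whitespace before comparing keeps the matching robust to how the label text is captured.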

LabelLens is built using the Quasar and Appwrite frameworks, and is a practical demonstration of a new micro-SaaS framework utilising these platforms. More news on that coming soon!

Spatial Music R&D

Sound/Music R&D

I completed a Master’s degree in Music in 2002, focused on Real Electronic Virtual instrument design and performance (see also http://www.linseypollak.com/past-projects/rev-real-electronic-virtual/ ).

I’m deeply interested in music technology and new music genres across all boundaries. I’m also a producer/developer of educational apps such as the benchmark harmonica training app HarpNinja. As a Creative Technologist I also work with a wide range of technologies across many areas.

Update 2018-06-02: Accepted into the Oculus Start Developer Program.
Update 2018-01-09: Selected by Microsoft for the Windows Mixed Reality Developer Program.

I’m currently exploring new areas in music and sound with VR/AR/MR, AI/ML, and spatial audio/music through various projects and collaborations, such as:

Groove Pilot

Wings

Music Flow

Spatial Music Visualizer

Immersive Audio and Musical AI

Generative Music System

Since 2005 I’ve performed online in virtual worlds such as Second Life, playing solo (with and without my robot backing band) and jamming in real time with musicians located in different countries. You can check out my live online music performance website at http://komuso.info/, and you can read more about it in the Streaming Live Music project.

You can hear some of these musical explorations on hearthis and on SoundCloud.

Here’s a video demonstrating live networked music performance between myself in Tokyo and fellow SL musician Hathead Rickenbacker in Toronto, Canada.

Generative music systems are an area I’ve researched and experimented with extensively.
Moozk was an experimental audiovisual app I developed for public use: a Wacom pen tablet drove a painting application that also produced generative music as you drew. Kids seemed to love it.
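The core idea behind an app like Moozk can be sketched simply: quantise the pen’s position to a musical scale so that free drawing always produces consonant notes. The function below is an illustrative assumption (not the original Moozk code), mapping a pen’s vertical position to a MIDI note in a pentatonic scale.

```typescript
// Illustrative sketch (not the original Moozk code): mapping a pen's vertical
// position on the tablet to a note in a pentatonic scale, so free drawing
// always yields consonant generative music.
const PENTATONIC = [0, 2, 4, 7, 9]; // semitone offsets within an octave

function penYToMidiNote(y: number, height: number, baseNote = 48): number {
  // y = 0 at the top of the tablet; higher on the canvas = higher pitch
  const t = 1 - Math.min(Math.max(y / height, 0), 1); // normalise and invert
  const steps = Math.floor(t * 14);                   // roughly 3 octaves of scale steps
  const octave = Math.floor(steps / PENTATONIC.length);
  const degree = steps % PENTATONIC.length;
  return baseNote + octave * 12 + PENTATONIC[degree];
}

// Bottom of the canvas gives the base note; the top gives the highest note.
// penYToMidiNote(600, 600) → 48
// penYToMidiNote(0, 600) → 81
```

Quantising to a scale rather than using raw pitch is what makes this kind of system forgiving for children and non-musicians.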

Blue Noise was an experimental audiovisual performance using an EBow, slide guitar, digital effects, and a PC running custom-designed audio-responsive graphics.

I’ve given talks and performances about live music in Second Life:

- Mixed Reality Komuso On The Future of Music Online
- SynaesthAsia: Dynamic, Live Music/Visual Spectacular from Musicians in Two Countries

Music Flow

Adaptive music composition driven interactively by real-time 3D artificial intelligence.

Prototype for a VR project in the health and art therapy market.

Built using Unity3D
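The adaptive idea can be sketched in a few lines: an AI character’s state continuously drives the mix level of stacked music layers, so the score responds to what happens in the 3D scene. Unity itself is scripted in C#; the TypeScript below is a hypothetical illustration of the mapping, with the `arousal` parameter and `layerGains` function assumed for the example.

```typescript
// Hypothetical sketch: an AI agent's state drives the gain of stacked music
// layers, so the score adapts to the 3D scene (Unity itself uses C#).
type AgentState = { arousal: number }; // 0 (calm) .. 1 (excited), from the AI

function layerGains(state: AgentState, layerCount: number): number[] {
  // Each successive layer fades in as arousal rises.
  return Array.from({ length: layerCount }, (_, i) => {
    const threshold = i / layerCount;
    const span = 1 / layerCount;
    return Math.min(Math.max((state.arousal - threshold) / span, 0), 1);
  });
}

// layerGains({ arousal: 0.5 }, 4) → [1, 1, 0, 0]
```

Crossfading layers this way gives continuous musical response rather than abrupt track changes, which suits a health and art therapy setting.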
