Seems like the Weather Warlock’s http://weatherfortheblind.org/ has been reimagined for a corporate client.
You can’t say it’s not AI without a cogent argument about what is and is not AI. Current AI is mostly narrow AI built on deep neural networks loosely modeled on their organic counterparts. This is very likely a GAN (Generative Adversarial Network): one network encodes and learns the structure of the input data and generates imitations of it, while an ‘adversary’ network provides feedback on whether the output ‘sounds’ like Björk’s music.
So you have two networks: a discriminator that learns to tell Björk from not-Björk, and a generator that keeps trying to produce things that sound like Björk, improving from the discriminator’s feedback. Once trained, the generator can create new, original pieces in the ‘style’ of Björk from some input. In this case the input is a visual-recognition model that extracts semantic features from the clouds, birds, and whatever else it was trained on, and the generator renders those features in Björk’s style. So what you’re hearing is a form of style transfer from clouds or birds to Björk’s music…
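The two-network loop described above can be sketched with a toy one-dimensional GAN. Everything here is an illustrative assumption, not the architecture of the actual installation: a linear generator learns to imitate “real” samples drawn from a Gaussian, guided only by a logistic adversary’s feedback.

```python
import numpy as np

# Toy GAN sketch (assumed setup, not the real Björk system):
# generator g(z) = a*z + b tries to imitate "real" data ~ N(4, 0.5);
# discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0        # generator parameters
w, c = 0.0, 0.0        # discriminator parameters
lr, batch = 0.05, 64

def generate(n):
    return a * rng.standard_normal(n) + b

initial_mean = generate(1000).mean()   # starts near 0, far from the target 4

for _ in range(2000):
    # --- discriminator step: real samples get label 1, fakes get label 0 ---
    real = 4.0 + 0.5 * rng.standard_normal(batch)
    z = rng.standard_normal(batch)
    fake = a * z + b
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones(batch), np.zeros(batch)])
    p = sigmoid(w * x + c)
    w -= lr * np.mean((p - y) * x)     # standard logistic-regression gradient
    c -= lr * np.mean(p - y)

    # --- generator step: fool D, non-saturating loss -log D(fake) ---
    z = rng.standard_normal(batch)
    fake = a * z + b
    d = sigmoid(w * fake + c)
    grad_fake = -(1.0 - d) * w         # d(-log D)/d(fake)
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

final_mean = generate(1000).mean()     # should have drifted toward 4
```

The generator never sees the real data directly; it only follows the gradient of the adversary’s judgment, which is the “reinforcement” the comment is describing.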
This isn’t AI. It’s just procedurally generated music based on her catalog. Where does it learn? Where is the reinforcement? Feeding a lot of random inputs into something that outputs human-sounding music does not make it AI. Training a model to identify fluffy clouds is a form of AI, but what do those fluffy clouds have to do with the output? Fluffy or not, it will still generate music. Will fluffy clouds mean happier-sounding music? How does it know what happier-sounding music is, in the context of the guests?