AR Graffiti History
Visual demo from Nathan Gitter shows how ARKit-powered Augmented Reality in public spaces can be employed to present changes over time.
As Tumblr doesn’t really support embedded tweets, you can view the video yourself here
(Source: twitter.com)
Outside
Short video by Vladimir Tomin is a series of sketches imagining the real world manipulated through PC desktop software interfaces:
So that’s what it feels like to go outside
(Source: youtube.com)
Zero One
Motion Graphics project by @ravenkwok is a music video whose visuals are synced to the audio and coded in the Processing language:
Zero One is a code-based generative video commissioned by Zero One Technology Festival 2018 (zeroone-tech.com/) in Shenzhen, PR China. In this project, I collaborate with L.A. based producer / music technologist Mike Gao (soundcloud.com/mikegao).
This project consists of multiple interlinked generative systems, each of which has its own customized features; collectively they share the core concept of an evolving elementary cellular automaton.
The entire video is programmed and generated using Processing with minor edits in Premiere during composition.
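The project itself is coded in Processing; purely as an illustration of the core concept, here is a minimal Python sketch of an elementary cellular automaton (the rule number and grid size are arbitrary, not taken from the project):

```python
# Evolve an elementary cellular automaton: each cell's next state is looked
# up from its 3-cell neighbourhood using the bits of the rule number.
import numpy as np

def step(cells, rule=110):
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    idx = (left << 2) | (cells << 1) | right  # neighbourhood encoded as 0..7
    return (rule >> idx) & 1                  # corresponding bit of the rule

cells = np.zeros(64, dtype=int)
cells[32] = 1                                 # single live seed cell
for _ in range(16):
    print("".join("█" if c else "·" for c in cells))
    cells = step(cells)
```

Each printed row is one generation; the project builds its evolving visuals around systems of this kind.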
Instant 3D Photography
Research from Peter Hedman of University College London and Johannes Kopf of Facebook stitches photos taken with dual-camera smartphones into images with depth for viewing in VR (and can add extra effects):
We present an algorithm for constructing 3D panoramas from a sequence of aligned color-and-depth image pairs. Such sequences can be conveniently captured using dual lens cell phone cameras that reconstruct depth maps from synchronized stereo image capture. Due to the small baseline and resulting triangulation error, the depth maps are considerably degraded and contain low-frequency error, which prevents alignment using simple global transformations. We propose a novel optimization that jointly estimates the camera poses as well as spatially-varying adjustment maps that are applied to deform the depth maps and bring them into good alignment. When fusing the aligned images into a seamless mosaic, we utilize a carefully designed data term and the high quality of our depth alignment to achieve two orders of magnitude speedup w.r.t. previous solutions that rely on discrete optimization by removing the need for label smoothness optimization. Our algorithm processes about one input image per second, resulting in an end-to-end runtime of about one minute for mid-sized panoramas. The final 3D panoramas are highly detailed and can be viewed with binocular and head motion parallax in VR.
Our tech stitches bursts of depth images from dual-camera phones into 3D panoramas, which look great in VR. The capture and processing are so fast that you can now easily take 3D vacation shots when traveling.
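The full pipeline is involved, but its central idea, jointly estimating poses and smooth, spatially-varying adjustment maps that deform the degraded depth maps into alignment, can be illustrated with a toy 1-D Python sketch (all names and numbers here are invented for illustration, not from the paper's code):

```python
# Two "depth scans" of the same surface, each corrupted by low-frequency
# error, with scan B additionally shifted by an unknown pose offset.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0.0, 1.0, 200)
truth = 2.0 + 0.5 * np.sin(3.0 * x)
scan_a = truth + 0.10 * np.sin(2 * np.pi * x)
scan_b = truth - 0.08 * np.cos(2 * np.pi * x) + 0.3

K = 6                                   # few control points -> low-frequency
knots = np.linspace(0.0, 1.0, K)

def residuals(p):
    offset = p[0]                       # stand-in for the camera pose
    adj_a, adj_b = p[1:1 + K], p[1 + K:]
    fa = np.interp(x, knots, adj_a)     # spatially-varying adjustment map A
    fb = np.interp(x, knots, adj_b)     # spatially-varying adjustment map B
    data = (scan_a + fa) - (scan_b + fb - offset)  # deformed scans must agree
    reg = 0.1 * np.concatenate([adj_a, adj_b])     # keep adjustments small
    return np.concatenate([data, reg])

sol = least_squares(residuals, np.zeros(1 + 2 * K))
print("estimated pose offset:", sol.x[0])          # recovers roughly 0.3
```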
(Source: visual.cs.ucl.ac.uk)
AWAKEN AKIRA
Motion Graphics short from Ash Thorp and XiaoLin Zeng is a labour-of-love tribute to the legendary anime:
Awaken Akira was created by two friends, Ash Thorp and Zaoeyo (XiaoLin Zeng), who wanted to collaborate on a tribute to the iconic anime, Akira, by Katsuhiro Otomo. Its creation took over a year, as we had to coordinate our time on it with other project commitments. We hope you find it well worth the wait and truly enjoy our efforts.
We would like to thank Pilotpriest for the masterful score, as well as Raf Grassetti and The Joelsons for their help on the project. We would also like to give our most sincere thanks to Otomo-san and all the men and women who helped bring Akira to life. Akira has been and will always be a timeless and continual muse for all of us.
More background on development can be found at the official Awaken Akira website here
(Source: awakenakira.com)
How We Killed The Green Screen
Film director Michael Plescia presents an updated take on a classic SFX trick, rear projection, incorporating real-time camera-position tracking for convincing trompe-l'œil backdrops, a technology called ARwall:
But, green screen is a great tool, right? Not for one filmmaker.
Watch the story of why and how @MichaelPlescia tried to reinvent filmmaking and visual effects, and co-found a company, in order to make his feature film even a possibility.
See how the story vision for The Mop Liberator led to the creation of a technology that allows the filmmakers to photograph an imagined cyberpunk world in-camera, without any post-production, using ARwall, a visual effects compositing technology that combines new mixed-reality screens with the oldest (and maybe best) trick in the filmmaking book.
This piece delves into the inception, development process, and shows the production test footage of the first ever real-time, in-camera, real-light, real-lens, perspective-adapting, mixed-reality, rear-screen, compositing technique.
(Source: vimeo.com)
De tuin van Julin (Julin's Garden)
Latest video in a series from Water Ballet continues their practice of visual abstraction using oil and paints:
Birthday present for my brand new niece Julin, hope she likes the colors in a few years :)
Everything you see once happened in my little aquarium.
Music and video by Kamiel Rongen
(Source: vimeo.com)
2018 PyeongChang Winter Olympics Opening & Closing Ceremony projection mapping: making-of
by .mill
projection mapping: dot-mill
Image Inpainting for Irregular Holes Using Partial Convolutions
Latest work from @nvidia Research can provide convincing editing of photographic images with minimal input:
Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can edit images or reconstruct a corrupted image, one that has holes or is missing pixels. The method can also be used to edit images by removing content and filling in the resulting holes.
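The paper's building block is the "partial convolution": each window is convolved over valid pixels only, renormalised by how much of the window the mask covers, and the mask is then grown so holes shrink layer by layer. A rough single-channel NumPy sketch of one such step (a toy, not NVIDIA's implementation):

```python
import numpy as np

def partial_conv2d(image, mask, kernel):
    """image, mask: 2-D arrays of equal shape; mask is 1 where pixels are valid."""
    k = kernel.shape[0]
    pad = k // 2
    img = np.pad(image * mask, pad)       # zero out holes before convolving
    msk = np.pad(mask, pad)
    out = np.zeros(image.shape)
    new_mask = np.zeros(mask.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            win_i = img[i:i + k, j:j + k]
            win_m = msk[i:i + k, j:j + k]
            valid = win_m.sum()
            if valid > 0:                 # at least one valid pixel in window
                # renormalise so output strength is independent of hole size
                out[i, j] = (kernel * win_i).sum() * (kernel.size / valid)
                new_mask[i, j] = 1.0      # hole shrinks by one kernel radius
    return out, new_mask
```

Stacking such layers in an encoder-decoder (as the paper does, with learned kernels) lets valid content propagate inward until even large, irregular holes are filled.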
More at NVIDIA Research here
(Source: news.developer.nvidia.com)
Just A Line
Google Creative Lab have released an Augmented Reality doodle app that works on ARCore-supported Android phones and lets you share a short video of your creation:
Make simple drawings in AR, then share your creation with a short video.
Based on our previous open-source experiments with ARCore, Just a Line is an app that lets you make simple drawings in augmented reality, then share your creation with a short video. Touch the screen to draw, then hit record and share what you make. If you’re a developer, you can use the open-sourced code as a starter project for ARCore.
(Source: experiments.withgoogle.com)
The Parallax View
Project from Peder Norrby is an iPhone X visual toy that uses TrueDepth face tracking to produce a trompe-l'œil effect of depth from the position of your head:
Explainer video - enable sound!
The app, called #TheParallaxView, is in review on @AppStore #iPhoneX #ARKit #FaceTracking #madewithunity pic.twitter.com/6P8ofGZqP4
— ΛLGΘMΨSΓIC (@algomystic) February 28, 2018

Yes it's ARKit face tracking and #madewithunity … basically non-symmetric camera frustum / off-axis projection.
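The "non-symmetric camera frustum / off-axis projection" mentioned in the tweet can be sketched in a few lines; this Python version assumes a screen centred at the origin of the z = 0 plane and a tracked eye position in the same metric coordinates (the function and values are illustrative, not from Peder's code):

```python
import numpy as np

def off_axis_projection(eye, screen_w, screen_h, near=0.01, far=100.0):
    """eye: (x, y, z) of the viewer's eye, z > 0 in front of the screen."""
    ex, ey, ez = eye
    # Frustum edges on the near plane, shifted opposite the eye's offset so
    # the picture on the screen stays perspective-correct as the head moves.
    left   = (-screen_w / 2 - ex) * near / ez
    right  = ( screen_w / 2 - ex) * near / ez
    bottom = (-screen_h / 2 - ey) * near / ez
    top    = ( screen_h / 2 - ey) * near / ez
    # Standard OpenGL-style asymmetric frustum matrix.
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Rebuilding this matrix every frame from the TrueDepth head position is what
# produces the illusion of looking through the phone into a deep box.
P = off_axis_projection(eye=(0.05, 0.02, 0.35), screen_w=0.062, screen_h=0.135)
```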
The app is currently in review, but Peder plans to release the code on GitHub in the future for developers to experiment with.
You can follow progress at Peder’s Twitter account here
The app is now available on the App Store [Link]
(Source: twitter.com, via kikori2660)

