This week I managed to get a script onto paper. It isn't written in the form of a traditional script, but is closer to a treatment. I thought this was more appropriate for my short experience (60-90 seconds), and it gave me much more liberty to deconstruct interactivity.
I won't include the full breakdown in this post, but the experience takes place in a movie theatre and consists of three characters: a concession-stand employee, a movie-goer, and an Ophelia-like screen actress. As of now, my main objective is to induce an altered state of consciousness, akin to that of dreaming. In its current state, the experience gives the user the constant option to switch between the movie-goer's POV and a bird's-eye view. I have chosen to limit this freedom at two critical moments in the story: the beginning or set-up of the action, and the climax/ending. I believe this will make for a more satisfying experience, since it provides a circular paradigm (given the specific context of the experience).
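To give a concrete picture of the switching mechanism, here is a minimal Unity sketch of how I imagine it working. Nothing here is final: the Tab key stands in for whatever controller or gaze trigger I settle on, and the povLocked flag would be set by the narrative during the set-up and the climax.

```csharp
using UnityEngine;

// Sketch of the POV toggle: two cameras, one for the movie-goer's seat
// and one for the bird's-eye view. The lock flag is flipped by the
// narrative at the set-up and climax so the user can't switch there.
public class PovSwitcher : MonoBehaviour
{
    public Camera movieGoerCamera;   // first-person seat view
    public Camera birdsEyeCamera;    // overhead view of the theatre
    public bool povLocked;           // set true during set-up and climax

    void Start()
    {
        // Begin in the movie-goer's POV.
        movieGoerCamera.enabled = true;
        birdsEyeCamera.enabled = false;
    }

    void Update()
    {
        // Placeholder input; the final trigger could be a controller
        // button or a gaze dwell instead of a key press.
        if (!povLocked && Input.GetKeyDown(KeyCode.Tab))
        {
            movieGoerCamera.enabled = !movieGoerCamera.enabled;
            birdsEyeCamera.enabled = !birdsEyeCamera.enabled;
        }
    }
}
```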
As for the storyboard, I went into TiltBrush for a couple of hours and sketchily explored an idea I have for what a storyboard can be in VR. Since space will become more and more narrative in VR, I thought a storyboard could become a structure that reflects the story. I made a theatre which incorporates the ambiance and color palette of the world I want to create, and turned it into an exhibition space. Inside, there will be a color key for different characters, the interface, and different POVs. The corridors diverge when viewpoints diverge.
I opted for this method because I think of story as structure and structure as story. Wasn't it Goethe who said to think of architecture as frozen music and music as liquid architecture? There's an undeniable relationship between how we perceive spaces and how we perceive abstract expressions of emotion. For now, I am exploring this idea in the context of VR experiences. A storyboard could be an exhibited narrative space.
It’s going to be a short update this week. Two new things:
(1) The water now responds to input. While the space bar is held down, the ocean recedes and rises slightly, and once it is released, it returns to its initial position. At this stage I can start preparing the breath-monitoring belt so we can later translate that keyboard input into the numerical values the belt will produce. (A rough sketch of this input handling follows below.)
(2) I added a small island where the user will be able to teleport around.
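For reference, a minimal sketch of the input handling, assuming the ocean is a single transform whose height we nudge. The numbers are placeholders, and the space-bar check is the piece that will later be swapped out for the belt's values.

```csharp
using UnityEngine;

// Sketch of the ocean's response to input. The space bar stands in for
// the breath belt: while it is held the water shifts slightly, and on
// release it eases back to its resting height. Later the key state can
// be replaced by the belt's numerical values.
public class OceanBreathInput : MonoBehaviour
{
    public Transform ocean;          // the water plane
    public float riseAmount = 0.3f;  // how far the water shifts, in metres
    public float speed = 2f;         // how quickly it moves between states

    private float restingY;

    void Start()
    {
        restingY = ocean.position.y;
    }

    void Update()
    {
        // Target height depends on whether the "breath" input is active.
        float targetY = Input.GetKey(KeyCode.Space) ? restingY + riseAmount : restingY;

        Vector3 p = ocean.position;
        p.y = Mathf.MoveTowards(p.y, targetY, speed * Time.deltaTime);
        ocean.position = p;
    }
}
```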
Some conceptual challenges consumed most of my time this week. I need to figure out what sort of gaze interaction would be appropriate for this space, given my objectives: first, prioritizing breath, and second, increasing awareness of the relationship between our breath frequency and visual perception. I want the virtual environment to be neither challenging nor taxing on the senses, so I am considering working in a greyscale world with low contrast. Perhaps the gaze interaction could consist of making small RGB values appear in the TV-static-like ocean (sketched below). Unfortunately, this could prove counterproductive given my aim of avoiding eye strain, since it could distract from the central focus of the world: our lungs.
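Purely as a thought experiment, that gaze interaction might look something like this in Unity. Nothing here is implemented; the colour value, fade speed, and the assumption that the ocean mesh has a collider are all mine.

```csharp
using UnityEngine;

// Sketch of the gaze idea: cast a ray from the centre of the user's
// view and, while it rests on the ocean, fade a faint colour tint into
// the otherwise greyscale static. Assumes the ocean has a collider.
public class GazeColorReveal : MonoBehaviour
{
    public Camera userCamera;
    public Renderer oceanRenderer;
    public Color revealColor = new Color(0.6f, 0.7f, 1f); // subtle, low contrast
    public float fadeSpeed = 0.5f;

    private float blend; // 0 = pure greyscale, 1 = fully tinted

    void Update()
    {
        Ray gaze = new Ray(userCamera.transform.position, userCamera.transform.forward);
        bool lookingAtOcean = Physics.Raycast(gaze, out RaycastHit hit)
                              && hit.collider.gameObject == oceanRenderer.gameObject;

        // Ease the tint in while gazed at, and back out when the gaze leaves.
        blend = Mathf.MoveTowards(blend, lookingAtOcean ? 1f : 0f, fadeSpeed * Time.deltaTime);
        oceanRenderer.material.color = Color.Lerp(Color.white, revealColor, blend);
    }
}
```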
I am only starting to think all of this through. As a side note, since it's not part of the requirements of this unit, I also found it compelling to see the mesh used for the water as a dome above us. Have a look for yourself:
I think this is an interesting texture to work with from many perspectives. Still, I will most probably refrain from using it anywhere else, since it would detract from the bio-feedback ocean.
As for my MR app, the main challenge I now have to face is ensuring that it always presents a correct perspective. If the virtual hand does not correlate with the phantom-limb user's perspective, the visual feedback will have no efficacy. For this I have been looking at cylindrical trackers and what they can do to keep the virtual arm and hand correctly oriented (see the sketch below).
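I have no tracking code yet, but the core of the fix might look like this in Unity, assuming whatever tracking SDK I end up using exposes the tracker's pose as a Transform. The offsets and names are all hypothetical.

```csharp
using UnityEngine;

// Sketch of keeping the virtual limb aligned with the user's residual
// limb. trackerTransform stands in for the pose a cylindrical tracker
// would report each frame; the offsets account for where the tracker
// actually sits on the arm.
public class VirtualLimbAnchor : MonoBehaviour
{
    public Transform trackerTransform;   // pose reported by the tracker
    public Vector3 positionOffset;       // tracker-to-limb position offset
    public Vector3 rotationOffset;       // tracker-to-limb rotation, in Euler angles

    void LateUpdate()
    {
        // Follow the tracker so the virtual arm stays where the real
        // arm would be in the user's view.
        transform.position = trackerTransform.position
                           + trackerTransform.rotation * positionOffset;
        transform.rotation = trackerTransform.rotation * Quaternion.Euler(rotationOffset);
    }
}
```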
This is all for this week, and I hope you enjoy seeing this project come to virtual life. Please comment with any suggestions or reactions; I would love to know what would make this environment more useful.
I was lucky enough to watch a 14-minute VR film some days ago, which changed my whole approach to this story-telling unit. I had been convinced that this 60-90-second story of mine needed an interactive element to be successful in VR, but this film proved me wrong. I watched it with a cardboard headset on my phone, and although my eyes had a hard time adjusting to the camera focus, it still moved me to tears. The camera angles and movements are compelling and intuitive. The use of close-ups is remarkably successful and lands right on the necessary emotional beats. It proved, for me, that cinematic VR could very well be defined by the simple choice of letting yourself go as a VR user and being led. Of course, leading well as a director is no easy matter.
This realization comes at a good time, because my priority for this unit is narrating space. I am now convinced camera movement will be my protagonist, and the user will only need to sit and look around. This may seem boring to users accustomed to more gamified experiences, but trusting a narrative to move for you has great potential. It is imperative, however, that it is a camera that thinks. My instructor has recommended a book on spatializing narrative, which I ordered for my personal library and am very keen to read, but it still hasn't arrived.
Here is a sketch I did exploring character design and palette.
I have been exploring some character design, but my goal for the coming week, now that I have decided to fully limit interaction to head movement, is to write a first script. This will raise many questions about scripting 360 stories and is sure to be a feat. Wish me strength and flexibility as I start building this narrative world.
For the first part of this unit, I have decided to pursue a meditative environment, where an ocean responds to the values of a user’s breathing. I will scrap the idea of including a hammock as a physical object or possibility in the program, since it might be outside of the scope of this project.
The ocean will be an abstracted sea, made to look like TV static with an extra dimensionality. Here are some updates:
(1) I went down to the Creative Technology Lab at UAL to see what my best option for breath detection would be. I was told of an experiment another student had done before, where she made a belt that translated the expanding and contracting of the lungs into values. We decided that might be my best option, since other, more accessible options, like a microphone, would only register very exaggerated breathing and completely miss subtle cues like inhales.
I was told to first worry about getting the ocean to move and respond to some sort of input, and then we could worry about the bio-feedback. So I got to work on the dotted sea.
(2) Zhan Gurskis, our lecturer in immersive and interactive design, helped me add a script to a plane that makes it move like an ocean. This was first done by aligning a series of spheres and animating their transforms as a function of their x position (a rough sketch of this first approach appears below).
(3) I like this form of the dotted ocean, but I would prefer more density and more of an illusion of volume. Zhan showed me another approach, which instead imports a simple plane from Maya and uses a shader to animate its points. This ended up looking something like this:
Going forward I want to play around with the 2D texture on the dots to give more of an illusion of their three-dimensionality. Ideally users could see the dots as if TV static were protruding out of a screen. The sea would retract once they inhale and crash against the shore as they exhale. Maybe they can control the color of the noise in the audio (there’s a whole rainbow of noise audio).
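For reference, here is a minimal reconstruction of that first, sphere-based approach in Unity C#. This is my own sketch of the idea rather than Zhan's actual script, and every value is a placeholder.

```csharp
using UnityEngine;

// Sketch of the sphere-based ocean: a row of spheres whose height
// follows a sine wave travelling along x, roughly the approach Zhan
// demonstrated before we moved to the shader version.
public class DottedOcean : MonoBehaviour
{
    public GameObject spherePrefab;
    public int count = 50;           // spheres along the row
    public float spacing = 0.5f;     // distance between spheres
    public float amplitude = 0.3f;   // wave height
    public float frequency = 1f;     // waves per unit of x
    public float speed = 1f;         // how fast the wave travels

    private Transform[] spheres;

    void Start()
    {
        // Align the series of spheres along x.
        spheres = new Transform[count];
        for (int i = 0; i < count; i++)
        {
            var s = Instantiate(spherePrefab, transform);
            s.transform.localPosition = new Vector3(i * spacing, 0f, 0f);
            spheres[i] = s.transform;
        }
    }

    void Update()
    {
        // Displace each sphere's height as a function of its x position and time.
        foreach (var s in spheres)
        {
            Vector3 p = s.localPosition;
            p.y = amplitude * Mathf.Sin(frequency * p.x + speed * Time.time);
            s.localPosition = p;
        }
    }
}
```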
For the AR part of the assignment, I have carried on with the idea of direct visualization as a treatment for PLP (phantom limb pain). Mirror therapy (where unilateral amputees use a mirror to duplicate their working limb) has been shown to be a promising alternative. Most subjects report that they can feel their phantom limb moving when viewing the reflection of their intact limb.
I want to create an AR app where the missing limb can become an avatar limb, with no aim at photo-realism. I want to do this because I think an explicit suspension of reality might be more effective than an attempt at fully tricking the brain into believing it is the original limb. Further, it seems more visually striking to me and might add a symbolic dimension to pain that could prove useful. I imagine the avatar would look something like this:
The user would have the option to touch where it “hurts” and get visual validation of the pain.
From an article on the effect of direct observation on PLP:
The results of this specific study showed that direct visualization was more effective than mental visualization. Because of this, I would like to add some basic animations the user can cycle through and observe as if they were happening in their own body, like those cited above.
For the animation of the limb, I haven’t decided if I also want it to be able to mirror another limb in the case of unilateral amputees. Nevertheless, I do not exclude the possibility of making it accessible to those who lack both limbs, be they arms or legs. This last use will require a different set-up which I will need to take into consideration.
This is what is going on so far around this unit, and I’m excited to see how it progresses.
“If the poet's duty is to reveal all their secrets, then I'd rather do so in my sleep.” (Jean Cocteau, I think? I can't remember where I read or saw this, and if anybody knows the source, please leave a comment! I've been racking my brain for months now.)
“The ultimate desire of any artist is to get someone to listen to their dream. With film, I can force you into it.”
(Another quote whose authorship and precise wording I can't recall, yet which sticks with me; possibly also Cocteau…)
Our second unit involves VR as an evolving storytelling medium. Many creators are changing the way they think about story in order to bring storytelling to this new technology, which necessarily means coming to terms with its limits as it stands today: not breaking plausibility (once it is broken, it is possibly lost forever), inducing a sense of spatial presence, which may fluctuate (here users are a bit more forgiving), avoiding motion sickness, designating clear points of focus and interest, and the list goes on…
Here are the requirements for this unit:
(1) A short script for a linear narrative VR experience (60-90 seconds)
(2) A storyboard in one of the available VR apps (integrating VR-related concepts such as interaction, presence, or the uncanny valley)
(3) A VR environment based on the storyboard
(4) A critical report exploring:
(a) the concept (500 words)
(b) why it is best suited to a 360 immersive space compared to linear media, and how it reflects the relevant concepts (max 1200 words)
For this unit I have a clear objective. I want to prioritize the concept of spatial story, since I think it is an essential ingredient of this new medium. I want to create a sense of constructive and productive mystery that will blur the lines between the physical world and the virtual world. This I will achieve by creating a space and characters with levels of abstraction (not photo-realistic, but painterly). The script in itself will revolve around the language of dreams, where most of our visceral reactions are produced.
My objective is heavily influenced by both my experience with dreaming (I have kept a dream journal for more than a year now) and my recent research into immersive narratives. The term “environmental storytelling” was popularized in the gaming world, but may prove relevant to VR narratives as well. According to Henry Jenkins of MIT,
“Environmental storytelling creates the preconditions for an immersive narrative experience in at least one of four ways:
(1) spatial stories can evoke preexisting narrative associations
(2) they can provide a staging ground where narrative events are enacted
(3) they may embed narrative information within their mise-en-scene
(4) or they provide resources for emergent narratives.”
(This citation is taken from John Bucher's book Storytelling for Virtual Reality, p. 66.)
Here is the space I will be recreating, literally from a dream I had of a theatre:
I believe the 90-second story will go something like this: the user begins in front of a theatre curtain and has to push it aside to enter. Inside, they will suddenly get a view of the theatre with its three characters. The one lying down may look at the viewer if the viewer chooses to pass by them, but the other will beckon them to sit beside them. I'm still not sure how I will make sure they choose to do that, since an uncanny-valley effect may be induced, and I don't necessarily want to force discomfort.
I may need to explore the concept of vection as it relates to VR, since it creates the illusion of self-motion. An example of this is when one is sitting in a stationary train and sees another depart. One may perceive their own train as moving. If this illusion is achieved (without motion sickness!), I believe it may be a great victory for all VR storytelling.
The problem of getting the user to sit by the character notwithstanding, this narrative may also be an experience where the ending remains the same (is set in motion) no matter what the user chooses: in this case, a flooding of the theatre, with the screen as its spring. If the user does choose to sit by the man and listen to what he has to say, they will be subject to a reflection on dreams that brings the unreality to the forefront while the theatre floods. This interaction can serve as a verbal confirmation of the visual experience.
I have chosen this approach because one of my main interests is using VR to create new mythologies. For this, I believe, a spatial story and a visceral presence are necessary to induce later reflection on the narrative. I want emotion to be the primary effect; in this case, a sense of wonder and curiosity. Should it fail, I think it will still be a worthwhile resource for understanding how users respond to these kinds of abstract spatial narratives.
For my creation of atmosphere, I am currently looking at these examples taken from traditional 2D media.
One of the most potent effects VR may have is a redefining of our understanding of authorship and story… cheers to that! I would love to hear reactions to this sort of approach I’m taking for Unit 2, as I’m sure they would inform the evolution of this story (if one can even call it that yet…).
I had two motives when coming into VR. The first was to create meditative environments that serve as a breath of fresh air for the mind. The second, loosely related but with less explicit medical intentions, was to bring painting to this medium.
We will be working with Unity throughout the course. Here are the requirements for this unit:
(1) The VRE (Virtual Reality Environment) must include:
(a) a computer-generated scene including terrain, lights, and a skybox
(b) navigation of the space, be it teleportation or any other sort of locomotion
(c) some sort of interaction, including but not limited to controller-based and gaze interaction
(d) optional 360 sound
(2) The MR content must include:
(a) computer-generated content or live-action content activated via QR code
(b) computer-generated content must be 2D or 3D animation (max 5 seconds) or a 3D object
(c) export for phone or tablet on a platform of choice (Android or iOS)
(d) sound
As of week one, my idea for the VRE is a recreation of my favorite hammock on the beach. I decided on this approach since it remains true to my vision for a meditative environment. Further, should it fail, I still have it as a point of departure for the rest of my journey. Some initial objectives:
(1) To prepare an environment that can later be extended with features for a meditation program, such as validation of time passed (e.g. every ten minutes spent there brings a gratification in the form of pleasing visuals) and/or unlocked potential within the world
(2) A crucial one: biofeedback, or the integration of breath monitoring. I intend to make a shader in Unity for water whose texture is white noise. The waves crashing on the shore should correspond to the breath of the user. This is meant to increase awareness of the effects of breath on perception, since there seems to be a correlation between anxiety (or the lack thereof) and conscious breathing. This is seen in people who have frequent panic attacks and learn to control their breathing to prevent them at the onset. (A rough sketch of the white-noise texture idea follows after this list.)
(3) Manipulation of graphic quality so as to disturb the eyes as little as possible: this can be done by offering the option of a dark mode, as is done in many modern apps. This ensures the user can remain in the environment for longer periods of time. It is important to note that, because this is a meditation app, I do not preclude the possibility of the user closing their eyes as part of the experience of the beach space. This is an essential part of meditation, and the user must feel at liberty to do so and still remain a part of the environment.
(4) The physical object of a hammock could be integrated into the experience. I think this raises difficulties in computer animation, as well as the physical issue of lying down while wearing an HMD. A quick idea for resolving the latter is to present the experience in an exhibition space where a special headrest cradles the back of the head and diminishes discomfort.
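As a first pass at objective (2), here is a minimal Unity sketch of the TV-static water texture, generated on the CPU for simplicity rather than in a shader as the final version would be. The resolution and refresh rate are arbitrary placeholders, and the breath-to-wave mapping would sit on top of this.

```csharp
using UnityEngine;

// Sketch of the TV-static water texture: a greyscale white-noise
// texture regenerated a few times per second and fed to the water
// material, so the surface flickers like an untuned television.
public class StaticWaterTexture : MonoBehaviour
{
    public Renderer waterRenderer;
    public int size = 256;           // texture resolution
    public float refreshRate = 12f;  // noise updates per second

    private Texture2D noise;
    private float nextRefresh;

    void Start()
    {
        noise = new Texture2D(size, size);
        waterRenderer.material.mainTexture = noise;
    }

    void Update()
    {
        if (Time.time < nextRefresh) return;
        nextRefresh = Time.time + 1f / refreshRate;

        // Fill the texture with random greyscale values, like TV static.
        for (int x = 0; x < size; x++)
            for (int y = 0; y < size; y++)
            {
                float v = Random.value;
                noise.SetPixel(x, y, new Color(v, v, v));
            }
        noise.Apply();
    }
}
```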
Enough about the VRE for now. Admittedly, I had not given much thought to MR before this unit, and for some reason its applications come less naturally to me. Nevertheless, my idea for the MR application is designed around visualization techniques in the treatment of phantom limb pain. The idea is to create an app where a person missing a limb may visualize said limb and touch trouble areas, which would light up. The user would then have the option of “treating” them through massage or other purely symbolic techniques. I need to do more empirical research on visualization techniques for phantom limb patients, and that is sure to come. My concept for this section is inspired by my belief that visualization is a powerful technique that can rewire our perception of the human condition. Let's see if this idea survives my research or if it is forced to evolve into something more useful and/or purely amusing.
That’s it for Week 1, stay tuned to see where these experiments may go. Be sure to check out my post on Unit 2 coming this week, as well, since it tackles the story-telling aspect of more cinematic VR. Safe travels to all virtual heads!
Hi all,
As of this week, I am beginning my journey into artistic VR. As part of my year-and-a-half MA program in VR, I will document my projects on a weekly basis. This will serve two purposes: (1) to give my instructor evidence that I am doing the required coursework, and (2) to provide a future reference for myself and other VR creators on our way to defining this new medium.
I have chosen the symbol of the Datura to accompany my endeavor. The obvious reason is my love for flowers and how they hold in their blossoming a universal truth of growth and fruition. The less obvious reason is how they too, like VR, hold both great potential and great danger. The Datura flower's poison, in the best of cases, blurs the boundaries between fantasy and reality. In the worst of cases, it can be lethal.
Join me on this tightrope between the planes of intoxication and sobriety so that we can begin to fully understand how VR can illuminate perception as it stands today and/or shape it for the future.