Team Members: Ianto Xi, Tatiana Ferreyra, David Ju, Josef Nunez

Introduction and Mission Statement
Our mission is to provide an interesting, yet informative video that will demonstrate the purpose of our application. We hope that our video will answer any questions that the user may initially have about the functionality and usability of our application. We also aim to provide a clear demonstration of how the application may be used during an average day and how we expect the user to interact with it. In order to achieve this mission, we have divided the work amongst the four of us (Ianto, Josef, David, and Tatiana). Josef posed for our pictures, while Ianto edited these pictures and organized them. David put the video together and wrote a rough script for our video, and Tatiana made the final adjustments to the script and narrated the video. In addition, Ianto wrote the implementation strategy, Josef wrote the prototype description, David wrote the video prototyping description, and Tatiana wrote the introduction and mission statement.
Our prototype demonstrates a user inputting travel information, inputting sleep schedules, selecting a sleep strategy, executing a sleep strategy with guidance of watch notifications, creating a review for a sleep strategy, and viewing a past sleep strategy review. These are the app's prominent tasks.
1. The app opens with the above splash screen.
2. The user inputs his travel information, choosing between two methods of input: (1) entering a current city and destination city, or (2) entering a travel date and flight number.
3. The user inputs his current and target sleep schedules. The user provides the times he regularly wakes up and goes to sleep in his current city. Then he provides the times he wants to wake up and go to sleep while at his destination city.
4. The user selects a jet lag prevention strategy. A short description of each strategy is provided to the user so he can make an informed decision.
5. The chosen strategy is displayed on the phone as a sequence of graphs. The graphs consist of the time of the user's current location, the time of the user's destination, a sine curve representing the daylight cycle of the destination time zone, and a striped area designating user sleep time. Each day has a unique goal, which is the striped area of time for which the user should be asleep. If the user sleeps in this allotted amount of time each day, then he will successfully adjust his sleep cycle to the time zone of his destination by his day of departure.
6. The user is sent notifications reminding him of specific actions to take to keep on track with his sleep strategy. Actions currently include when to wake up, when to go to sleep, when to take melatonin, and when to change the user's exposure to light. Further actions may be added with additional research on successful jet lag prevention tactics.
7. After the user arrives at his destination, the user has the choice of writing a review on the effectiveness of the sleep strategy he used. This review is stored in the app for future reference.
8. The user can view his past sleep strategy reviews.
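The per-day goal described in step 5 reduces to simple arithmetic: each night's sleep window moves a fixed step from the home schedule toward the destination schedule until the full shift is covered. A minimal Java sketch of that idea, assuming a one-hour-per-day shift rate (the class and method names, and the rate, are our illustrative assumptions, not part of the design):

```java
import java.time.LocalTime;

class SleepPlan {
    // Goal bedtime for a given day of the plan: shift the home bedtime toward the
    // destination schedule one hour per day, capped at the total shift required.
    // A negative totalShiftHours means the user must go to sleep earlier each day.
    static LocalTime goalBedtime(LocalTime homeBedtime, int dayIndex, int totalShiftHours) {
        int direction = Integer.signum(totalShiftHours);
        int hoursShifted = direction * Math.min(dayIndex, Math.abs(totalShiftHours));
        return homeBedtime.plusHours(hoursShifted); // LocalTime wraps past midnight
    }
}
```

The striped area on each day's graph would then span from this goal bedtime to the corresponding goal wake-up time, computed the same way.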
_______________________________________________________________

Discussion of Video Prototyping
First and foremost, we had to decide whether to use still images, live recorded segments, or a combination of both. To properly represent the functionality of our app, we had to show the user in different locations (airports, different countries) that were not accessible to us, and also display our prototype interface screens on the mobile and smartwatch devices. Such tasks would have been too difficult to edit into live recorded segments, so we opted for still images alone, since it was much easier to Photoshop the user into different locations and Photoshop our prototype interface screens onto the actual devices.

After creating a storyboard for our video, identifying the tasks we wanted to portray, and drawing up a list of all the pictures needed to represent each scene, we had a member of our team act as the user and took pictures of him performing the various activities our video required. The pictures were then Photoshopped as needed and organized by scene and order of appearance. From these images, a rough script for the narration was produced, along with a preliminary video to serve as a baseline to work from and improve upon. Our designated narrator then went over the script and video, made final changes to the script, and told our designated video editor which images needed a longer or shorter duration. Once the video contained the proper timing for all images, our narrator recorded her narration and our video editor added it to the video, producing the final video prototype that was published.
We did not come up with any unique techniques of our own, but we saw something in another video prototype that we thought was really neat: a series of still images displayed in rapid succession to create the impression of actual movement. In fact, some of our members initially thought those particular scenes were live recorded segments, until they watched again and realized the images were animated to produce the illusion of movement. Clearly impressed, we decided to incorporate this element into our video as well.
The good aspect of our video prototyping technique is that it was relatively easy to accomplish, since all it required was still images and Photoshop to produce the pictures representing the functionality of our app. This is more efficient and far less time consuming than using live recorded segments, which take a long time to capture (traveling to appropriate locations and often doing multiple takes) and are even harder to edit when special effects are required (such as projecting the prototype interface onto an actual device in the video). With live recorded segments, the narration must also be adapted to the length of the pre-recorded footage, or vice versa, potentially resulting in rushed delivery or empty moments of silence where the timings don't match up. Still images give us more flexibility: we can easily adjust the duration of particular images to match the narration and create a more fluid presentation.
The difficult aspect of our prototyping technique was composing the video. One member was responsible for the narration and another for putting the video together, and our initial plan for producing the video was not efficient. We originally decided that our narrator would record a narration based on the images we had gathered, with the intent that the video editor could insert her narration and adjust the image durations afterwards, but we did not consider how the narrator would convey her intent (such as which part of the narration goes with which image) to the video editor. We then had the video editor compose a video with estimated image durations to serve as a guide, intending for the narrator to record against it and tell the editor which images to shorten or lengthen, but this could not be done over the web as we had hoped, since the specific timing and duration of images is much easier to discuss in person. In the end, the narrator and video editor met in person to work on the video, which made communication and the production process much faster than working remotely would have been.
_______________________________________________________________

Smartwatch Interaction Strategy
Sleep Sentry is entirely feasible to build. Most of the application's logic is very simple and involves only basic arithmetic. Time zone calculations can be tricky to get right, but we plan on using Android's TimeZone class to simplify them. Retrieving flight data from a flight date and flight number will require outside services such as FlightXML, PlaneXML, or FlightStats.
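To illustrate how simple the core arithmetic is, here is a sketch of the time-shift calculation using the standard java.time API (a modern alternative to the TimeZone class mentioned above; the class name is ours, and the zone IDs in the usage below are placeholders, not cities from the design):

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

class JetLagMath {
    // Hours the body clock must shift between two IANA zone IDs, measured at "now".
    // A positive result means the destination clock is ahead of the home clock.
    static long shiftHours(String homeZone, String destZone) {
        ZonedDateTime now = ZonedDateTime.now(ZoneId.of(homeZone));
        int home = now.getOffset().getTotalSeconds();
        int dest = now.withZoneSameInstant(ZoneId.of(destZone)).getOffset().getTotalSeconds();
        return (dest - home) / 3600;
    }
}
```

Note that the fixed-offset "Etc/GMT±N" zone names invert the usual sign convention, so "Etc/GMT+5" means UTC-5; real city zone IDs like "America/New_York" avoid that wrinkle but make the result depend on daylight saving time.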
Most graphical interface elements will be simple to produce. The main challenge may be the graph that represents daylight cycles; we may need to use one of the several graphing libraries available, including GraphView and AndroidPlot, among many others.
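The daylight-cycle curve itself is just a phase-shifted sine wave, so the data series fed to a library like GraphView or AndroidPlot could be generated along these lines (a fixed 6:00 sunrise and 18:00 sunset is a simplifying assumption; a real implementation would use actual sunrise and sunset data for the destination):

```java
class DaylightCurve {
    // Daylight value in [-1, 1] for a destination-local hour of day:
    // peaks at local noon, bottoms out at midnight, crosses zero at
    // the assumed 6:00 sunrise and 18:00 sunset.
    static double daylight(double hourOfDay) {
        return Math.sin(2.0 * Math.PI * (hourOfDay - 6.0) / 24.0);
    }
}
```

Sampling this function at, say, half-hour intervals over each plan day yields the points for the sine curve behind the striped sleep window.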
Watch interactions are similarly simple. The watch will serve, in a sense, as an input device by measuring heart rate, ambient light, and motion, all of which are available as sensors on the watch. Users will not input much textual information into the watch. What remains to be seen is whether these sensors stay fully active while the watch face is off. As an output device, the watch will simply deliver reminders.
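The reminder logic can likewise stay trivial: each reminder time is just a fixed offset back from the night's goal bedtime. A sketch, with lead times that are purely illustrative (the 30-minute melatonin lead and 60-minute lights-down lead are our assumptions, not figures from the design):

```java
import java.time.LocalTime;

class ReminderSchedule {
    // When the watch should buzz for a melatonin dose: 30 minutes before
    // the goal bedtime (assumed lead time).
    static LocalTime melatoninReminder(LocalTime goalBedtime) {
        return goalBedtime.minusMinutes(30);
    }

    // When the watch should buzz to reduce light exposure: 60 minutes
    // before the goal bedtime (assumed lead time).
    static LocalTime dimLightsReminder(LocalTime goalBedtime) {
        return goalBedtime.minusMinutes(60);
    }
}
```

Further reminder types from future jet lag research would slot in the same way, as additional offsets from the day's goal times.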