We used a combination of generative and formative research methods to help improve the user experience of OpenBAS, an open source building automation system currently under development at UC Berkeley. In conjunction with the client, our team decided on two objectives for this project: clarifying the target market and prioritizing features. The research methods used during our project consisted of heuristic evaluation, interviews, usability testing, a diary study, card sorting, and participatory design. In this paper we explain our research methods, findings, and recommendations.
EXECUTIVE SUMMARY
This report outlines the research methods and findings for the OpenBAS usability study conducted as a final project for INFO 214: Needs and Usability Assessment, Spring semester 2015.
OpenBAS is an open source building automation platform created by EECS Professor David E. Culler and his team at the University of California, Berkeley.
Over the course of the semester our team conducted a heuristic evaluation, diary study, and usability tests on the current web interface features and functionality. We also conducted interviews with market experts, software engineers, current users and potential users of the system.
With the initial stages of our research complete we were able to make UI recommendations using card sorting and participatory design methods. These methods were conducted with current users of the OpenBAS web interface.
PROJECT BACKGROUND
OpenBAS is a powerful and extensible open source system for integrated networked building systems, designed to overcome limitations of existing building automation software. Commercial buildings have huge sensor networks, currently composed of temperature sensors, electrical sub-meters, fans, air dampers, light relays, pumps, and valves. With the proliferation of additional devices (security, electrical, appliances, entertainment, etc.), collectively referred to as the “Internet of Things”, there will be an increased demand for tools to integrate these devices.
Existing building management devices are vertically integrated with little interoperability. Modern networked devices offer web service APIs, but they all differ, even across models from the same vendor. OpenBAS removes device compatibility as a primary concern.
The existing implementation of OpenBAS was designed for small to medium commercial buildings; however, the system can scale up or down to potentially tap into markets for residential and larger commercial buildings, paving the way for the first generation of building operating systems. The hardware components can run on an enclosed system on the premises or on connected systems via the Internet.
There are currently two deployments, PlexiBAS and CIEE. The CIEE deployment, a medium commercial demonstration in the CIEE facility, is the focus of our usability assessment. It occupies the second floor of a historic (1935) downtown Berkeley building, about 7,500 sq ft, with five HVAC zones, lighting control, and environmental sensors (carbon dioxide, etc.).
The interface was designed to be accessible via web application, tablet, phone, or standalone system interface. The web interface, which is the focus of our study, was created to be highly customizable, using the Meteor framework over the OpenBAS API. This first iteration was admittedly designed for the software engineers monitoring the pilot and not necessarily for the potential end users of the system. Although the interface is a huge leap forward with respect to other building automation systems, it is still early in development.
The Display tab, which we will refer to as the dashboard, is an integrated view of HVAC, lighting, and miscellaneous controllers. Additional tabs allow the user to set schedules and building zones, view device status, and monitor activity. A unique and pivotal feature of OpenBAS is the automapping process: once a device is plugged into the OpenBAS building LAN, it is noticed by the OpenBAS discovery service and propagated to the interface's Status tab.
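As an illustration, the automapping flow can be sketched as a simple registry that grows as devices announce themselves on the building LAN. This is a minimal sketch under our own assumptions; the class, method names, and device metadata below are illustrative and do not reflect the actual OpenBAS discovery service API.

```python
# Illustrative sketch of an automapping-style discovery flow (assumed design,
# not the real OpenBAS implementation): new devices on the LAN announce
# themselves and are registered automatically, and the Status view simply
# reflects the current registry.

class DiscoveryService:
    """Registers newly seen devices so a Status view can display them."""

    def __init__(self):
        self.registry = {}  # device_id -> metadata

    def handle_announcement(self, device_id, metadata):
        # A device joining the LAN announces itself; unseen devices
        # are added to the registry with no manual configuration.
        if device_id not in self.registry:
            self.registry[device_id] = metadata

    def status_tab(self):
        # The Status tab lists every device the service has discovered.
        return sorted(self.registry)


svc = DiscoveryService()
svc.handle_announcement("thermostat-01", {"type": "thermostat", "zone": "2F-east"})
svc.handle_announcement("relay-07", {"type": "light relay", "zone": "hallway"})
print(svc.status_tab())  # ['relay-07', 'thermostat-01']
```

The key usability point is that discovery requires no user action: the device simply appears on the Status tab once plugged in.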
The web interface provides access to an advanced zone controller, an advanced scheduler, an inter-zone controller, personal environmental control, and prognostic and diagnostic tools. Users can turn smart devices on/off, set schedules, calibrate settings, and view analytics.
Source: David E. Culler, “Design and Analysis of an Open Extensible Platform for Networked Building Automation Systems”.
RESEARCH OBJECTIVES
The OpenBAS client agreed to two research goals for this project:
Clarify the target customer and user. The OpenBAS product was developed based on the hypothesis that, as “smart” devices proliferate in homes and workplaces, it will become desirable to have an integrated control system that unifies disparate systems and vendor ecosystems. There are several potential customers for such a product, but the client has not devoted effort to customer development and segmentation. Our first goal was to help the OpenBAS team understand who the target users are and what their needs are as they relate to building control systems.
Feature prioritization. OpenBAS contains a wide range of features already available to the user with many others that could be implemented in the future. Our second goal was to help the client identify which features users found to be the most useful.
DESCRIPTION OF RESEARCH METHODS
Sampling and Recruiting: For the purposes of this project, we chose to focus on understanding the needs of office managers, as these individuals represent a community of probable OpenBAS users. Through our user interviews, we identified other potential user segments, which represent opportunities for future exploration. Recruitment was challenging due to limited access to office managers and representative proxies; beyond the process of identifying potential users, we experienced low response rates to our inquiries. Given additional time for the project, our top priority would have been to substantially bolster our user sourcing and recruitment activities.
Heuristic Evaluation: Although the usefulness of heuristic evaluation is much debated, we felt it was a good starting point for our assessment of OpenBAS. Knowing that we would be conducting interviews and usability tests, familiarizing ourselves with the OpenBAS web interface seemed a crucial step.
Interviews: We interviewed a total of six individuals: two current and former OpenBAS users, two proxy users, and two expert users in the form of members of the OpenBAS development team. Input from the expert users was used only for project background and context. Input from the current/former users and proxy users was used to formulate insights into user habits, needs, and pain points. We did not utilize a formal interview guide for our interviews but we did develop a set of questions prior to each interview based on the context of the interviewee.
Usability Testing: We performed usability tests with two individuals, both of whom have some familiarity with OpenBAS. One user is a novice user who rarely interacts with the product, while the other is more experienced, having used the product on a daily basis over a period of several months. Both users are office managers that fit the project’s target user set. The varying levels of expertise of our two users allowed us to assess whether certain tasks become easier to perform given increased exposure to OpenBAS. We developed a set of five tasks that we presented to each user at the beginning of the test. For each task, we asked the user to narrate her thought process as she worked to complete it, then to rate the overall difficulty of the task upon completion. Following the usability test, we conducted a brief interview to better understand the user’s experience with the product and how it does or does not align with her needs as an office manager.
Diary Study: To gain a better understanding of an office manager’s daily building management activities, we decided to conduct a diary study with the CIEE office manager. We would have liked to have a sample size of five people or more for the study; given our recruiting difficulties, we decided a diary study with a single participant would still provide us with valuable insights.
Overall goal: To better understand what the office manager currently does to maintain the office.
Subgoals: To understand:
- What building management tasks does she perform?
- What are her pain points?
- How much of this is accomplished through OpenBAS?
- How much is accomplished through traditional interfaces?
Duration: One week (4/10/15 - 4/17/15)
Format: Paper log sheets that we provided to the manager at the beginning of the study. (Although using electronic media would allow us to view the manager’s entries in real-time, we decided against it because it would take longer for the manager to retrieve than a paper log, and as a result might decrease the manager’s logging accuracy.)
Analysis: The manager logged 12 entries in total. Among them, 10 were for turning on/off the lights in the office hallway. One entry was for turning up the heat in one building zone, and another one was for turning down the temperature in the conference room. All tasks were performed on the physical interfaces (switches for lights, thermostats for temperature).
Card Sorting: Informed by the above studies, we moved on to redesigning the web interface. Our previous studies showed that one major issue with the current design was that it had too many features, which ended up confusing the user. So the first thing we wanted to find out was which primary features an office manager would need. To answer this question, we conducted a card sorting session with the CIEE office manager. We designed a two-tier card system. Tier 1 consists of the overall feature categories:
- Control Temperature
- Control Lighting
- Monitor Energy Usage
- Set Building Schedules
- View when a device was last detected
Under each category there is a list of tier-2 features. See Appendix D (Card Sorting Design and Results) for details.
We generated this collection as a quasi-exhaustive list that covered features available on the current interface, and additional ones brought up during interviews and studies. To cover unlisted categories and features, we also prepared ample blank cards for impromptu writing during the study.
During the study, we first instructed the manager to sort the tier-1 categories in order of preference, and then asked her to sort the tier-2 features under each tier-1 category. In the process, the manager suggested a number of additional features, which we added to the card set. At the end of the session, we had a clear list of preferences.
Participatory Design: Referencing results from the card sorting, we then proceeded to a participatory design session with the CIEE office manager. To prepare, we selected a list of important features from the card sorting, created a collection of pencil-drawn design wireframes, and separated them into moveable modules. During the design session, we showed the manager each design module and asked for her critique. Then we asked her to arrange the modules to form a layout she would prefer.
The following design challenges surfaced during the session:
- A large amount of data to display and many parameters to adjust (e.g., zone name, current status (heating/cooling/idle), current temperature, connection status, target temperature, time in effect, confirmation)
- No proper naming system for locating lights (needed for controlling individual lights from the web interface)
After the session, we initially experimented with design improvements to make the interface less cluttered while still maintaining the granularity of data and controls. It turned out to be a challenge. Then we thought back on some comments the manager made during the design session and had a sudden realization. When asked how she usually determined what temperature adjustment to make when someone complained, the manager said, “usually a couple degrees initially, and if someone complains again, adjust one more degree.” If this is how the manager adjusts the temperature, it seemed that a computer algorithm could do equally well, if not better. When asked about her management of the lights, she said, “other than turning on/off the hallway lights (the only lights in the building without sensors and not fully automated), I never touch the lights.” When asked if she would use the lighting zone control, she said, “No.”
These comments made us realize that granular control of the system may not be desirable to office managers. Instead, they may want to issue a simple command and let the system decide the precise parameters.
This realization led to our findings.
RESEARCH FINDINGS
(See also: OpenBAS Usability Assessment Final Presentation.)
Heuristic Evaluation: We identified numerous issues during our evaluation, many of which were also problematic for users of the system. The main findings are listed here:
Visibility of system status
- Inconsistent and confusing system response for selected operations
Match between system and the real world
- Information conveyed to be read by a software engineer, not end user
User control and freedom
- With no menu hierarchy, easy to get lost after clicking on links
Consistency and standards
- Different icons, text and colors used throughout the UI for different operations
Error prevention / Help users recognize, diagnose, and recover from errors
- Can delete crucial components with no warning and no easy recovery
Recognition rather than recall
- Multiple ways to locate key operations
Flexibility and efficiency of use / Aesthetic and minimalist design
- Many nested screens and use of links
Help and documentation
- Very little if any help features or documentation
Key Takeaway: Users are uncomfortable exploring the interface. The lack of error prevention and recovery, combined with sparse documentation, makes users uncertain about which operations will damage the system. The nested screens and links, with poor navigation tools, make it easy to get lost and difficult to remember where a key operation was previously found.
Interviews: We identified a number of potential customer segments through the interview process. For example, one of our user proxies alerted us to nuances between class A and class B commercial buildings, and also suggested that, depending on the building, the property manager rather than the office manager would be the end user for a system like OpenBAS. We were unable to explore these segmentations further due to the aforementioned access and time constraints. With regard to the office manager segment that we did focus on, we found that control of HVAC and lighting systems was a low priority in the context of their everyday jobs. Existing manual methods of controlling these systems adequately perform the required functions. Interviewees did indicate that the concept of a digital dashboard would be appealing if it could present status from all systems and provide control; however, the OpenBAS controls would need to be easier to manipulate than the existing manual solutions.
Usability Testing: Our usability tests confirmed many of the findings from our earlier heuristic evaluation. First, the UI occasionally hangs for periods of up to two minutes, preventing the user from performing any actions; this issue came up several times in both usability tests. Second, the UI is unintuitive: the users had difficulty navigating the interface, could not easily distinguish interactive elements of the UI from static elements, and did not understand the value of many of the features.
Diary Study: We made three observations about the manager’s building management activities:
- The manager did not perform many building management tasks over the week.
- The manager opted for the physical interfaces even for tasks that the web interface could perform.
- The lighting control tasks were very quick (“took a second”), but the AC adjustments took longer (“took ~5 minutes”).
Card Sorting: We made the following observations:
Temperature control is the most important feature, followed by lighting control.
Monitoring energy usage is desirable but challenging to present in an understandable and actionable way for a building manager (in fact, the manager didn’t like any of the tier-2 features we provided). This seems to be an area that should be automated, either fully or to the extent that actionable items are provided to the manager.
“Set building schedule” module is important but does not need to appear on the home page.
“Monitor device connection status” module should be integrated with temperature and lighting controls.
Some important features are currently overlooked:
- Label devices in a locatable way
- Inform users which devices are controllable by OpenBAS
- Allow users to sync thermometer dates in an easy and reliable way
Participatory Design:
Temporary temperature adjustment needs to be better automated: implement an algorithm that automatically adjusts the temperature when an adjustment request is received, and develop an interface that handles adjustment requests.
The user interface should be secondary to automation. Users should only need the interface to supply supplemental information that assists automation, and to make manual adjustments when automation fails to deliver. We designed a new interface mockup (Lo-Fi Interactive Prototype) and will conduct more usability testing on it soon.
To enable the mockup’s interactive feature, download the file to your local computer.
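The complaint-driven heuristic the manager described earlier ("a couple degrees initially, and if someone complains again, adjust one more degree") is simple enough to sketch directly. This is an illustrative sketch only; the function name, signature, and step sizes are our own assumptions, not a specification of how OpenBAS would implement the recommendation.

```python
# Hedged sketch of a complaint-driven setpoint algorithm based on the
# office manager's own heuristic. Names and step sizes are assumptions.

def adjust_setpoint(setpoint, direction, repeat_complaint=False):
    """Return a new temperature setpoint for a 'too hot'/'too cold' request.

    direction: +1 for 'too cold' (raise temperature), -1 for 'too hot'.
    A first complaint moves the setpoint two degrees; each repeat
    complaint for the same zone moves it one more degree.
    """
    step = 1 if repeat_complaint else 2
    return setpoint + direction * step


# A zone set to 70°F where an occupant reports feeling cold:
t = adjust_setpoint(70, +1)                        # first complaint -> 72
t = adjust_setpoint(t, +1, repeat_complaint=True)  # follow-up -> 73
print(t)  # 73
```

Encoding the heuristic this way would let occupants submit a plain "too hot"/"too cold" request through the interface while the system decides the precise parameters, consistent with our finding that granular control is not what office managers want.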
Usability testing of a software system that does not have a clearly defined target market is very difficult. Ideally, we would have kicked off the project with a basic market analysis: investigating competitors and substitute technologies, and researching device suppliers and their market in this space. We lost time working concurrently on identifying the business drivers and conducting the usability assessment.
The OpenBAS team has a great sales pitch, and they definitely won us over with the promise and benefits of the system, but it is difficult to offer firm usability recommendations for a product that doesn’t have a defined target market, much less a target user. Our team would have benefited from taking the time to research other technologies that had similar beginnings. This project was unfortunately a good example of how a lot of software is actually developed. This study helped us see the benefit of having an interdisciplinary team collaborate on the development of a product.
Regarding the actual implementation of our research methods, the methods we used all felt appropriate to the situation. Completing the heuristic evaluation early in the project helped us put user feedback into perspective before we initiated user interactions in a significant way. There was some apprehension in our group about the usability tests, due to the time demands and impositions on our users, but our two testers ended up being very helpful and seemed genuinely happy to help us. The development team also seemed genuinely interested in our feedback, and we are hopeful that they will use our efforts to make targeted improvements to the interface.
Our group quickly discovered significant challenges associated with recruiting over the course of the project. Generally, our approach was to talk to as many relevant people as possible in order to produce ample sample sizes for each of our research methods. We found that we learned something new about building management from every interaction with a user or a proxy user. However, due to time constraints and issues with access to the community of individuals (office managers, property management companies, industry consultants and experts, etc.) who were best suited to help us answer our research questions, we had to settle for smaller sample sizes than desired. We also collected a number of additional contact leads over the course of our research activities, many of which we were unable to follow up on. More flexibility in the project schedule would have allowed us to pursue more of these leads, which in turn may have resulted in richer insights.
When it came time to analyze the qualitative data we had collected, we found that findings synthesis is a process wherein results can emerge in unexpected bursts. At times, we did not immediately understand or appreciate certain information that we acquired from our user interactions. Occasionally, this translated into frustration within our team: we were overwhelmed by the flood of information collected from the studies but unable to develop concrete ideas from it. At some point, however, insights unexpectedly emerged after follow-up user interactions and/or additional time to absorb the information. To facilitate synthesis, it was helpful to summarize learning points or hypotheses after each study and try to validate or invalidate them in the next study.
Talking aloud to others about our learning and ideas was helpful, as others often brought in different perspectives that catalyzed the synthesis process. Constructing a framework of some kind is also helpful for guiding the synthesis process. But it is important not to get married to a particular framework, as it may change over time.
Particularly for certain research methods, deciding how much structure and guidance to give the user was a balancing act. If users are given no structure, they may not be able to give much feedback. But if users are provided with too much information, there is a risk that the participant will follow the lead of the facilitator and thus provide inaccurate or biased information. This issue came up during card sorting and participatory design. Our facilitator provided the manager with too much structure during participatory design by providing a long list of potential features, unconsciously embedding the assumption that managers wanted fine-grained control. The manager went along with this bias, even though it later turned out to be untrue.
A good way to combat bias is to employ an iterative process. After designing a study, try to spot the implicit assumptions embedded in the design. During the study, confirm with your user whether these assumptions are valid. Then go back and update your assumptions. After several such cycles, the assumptions will come closer to the real user experience.
Finally, it is worth noting that our project plan evolved over time based on our findings and on which activities we felt would add the most value at a given time. For example, we started the project under the assumption that we would initially conduct a survey using a list of e-mail contacts for our user community. When it became clear that this e-mail list would be more difficult to generate than anticipated, we adjusted our plan by instead moving into heuristic evaluation and interviews. Other research methods we employed, card sorting and participatory design, were not in our original project plan, but we nevertheless decided to implement them based on our initial findings from interviews and usability tests. Flexibility was a key ingredient in answering the high-level research questions as best as possible. It is necessary to have a plan; however, no battle plan ever survives first contact.
APPENDIX