Chapter 13, An Evaluation framework
This chapter presents the DECIDE framework, which I found really helpful. It lists the issues you should consider before conducting an evaluation, and working through the list gives you a good framework for the planning phase of an evaluation study. It is not a strict start-to-finish list, though: you can take the items in any order and iterate over them, because each item in the framework is related to the others.
D - determine goals. Setting the goals helps determine the evaluation's scope, which makes this an important step in planning an evaluation. One of the first goals mentioned in the book is to “check that the sketch indicates that designers have understood users’ needs”.
E - explore questions. It is important to understand why, and to dig deeper into questions about why people do a certain thing. The book takes as an example why people don’t use e-tickets; in our case we could ask why people don’t use the traffic information applications that are already on the market. Is it because they are not trustworthy, or simply because people don’t know they exist?
C - choose approach and methods. The authors say it is good to combine different methods, because then you get many different types of data, from many points of view. This variety of methods gives a broad picture that tells us “how well the design meets the usability and user experience goals that were identified during requirements gathering”.
I - identify practical issues. An issue can be finding users, lacking the facilities and equipment, schedule and budget constraints, or a lack of expertise. In many evaluations participants are paid for taking part, which can be very costly for a small project. Being prepared to face these issues may help you avoid them!
D - decide how to handle ethical issues. You should be able to guarantee that participants remain anonymous, and personal records about the participants should be kept confidential. The literature describes what to think about when having people participate in evaluations, but I think most of it is common sense: not leaving any personal information about a participant that could reveal their identity, and so on.
E - evaluate, analyze, interpret, and present data. After all the questions above, there are still more that need to be asked: is the method reliable? Valid? Is it affected by biases? I think a big problem can be carrying too many biases into the work. Experts can miss something in their evaluation because they think it is not important, and if you are interviewing someone, your tone of voice or facial expressions can influence them, so it is important to keep that in mind while performing interviews.
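To make the point about iteration concrete, here is a tiny sketch (my own illustration, not from the book) of DECIDE as a planning checklist. The item wording is paraphrased from the framework above; the key idea is that the items are not a strict sequence, so they can be marked off and revisited in any order.

```python
# The DECIDE framework as a simple checklist (illustrative sketch only).
# Items map to "done" flags; nothing forces a fixed order, mirroring the
# chapter's point that the framework is iterative, not sequential.

DECIDE = {
    "Determine the goals": False,
    "Explore the questions": False,
    "Choose the evaluation approach and methods": False,
    "Identify the practical issues": False,
    "Decide how to deal with the ethical issues": False,
    "Evaluate, analyze, interpret, and present the data": False,
}

def mark_done(checklist, item):
    """Mark one planning item as handled (it can be reopened later)."""
    checklist[item] = True

def remaining(checklist):
    """List the items still to be considered before the study starts."""
    return [item for item, done in checklist.items() if not done]

# Items can be taken in any order; here ethics is handled before methods.
mark_done(DECIDE, "Decide how to deal with the ethical issues")
mark_done(DECIDE, "Determine the goals")
print(remaining(DECIDE))
```

In a real project the checklist would of course carry notes and decisions, not just flags, but the structure shows why revisiting items is cheap: nothing downstream depends on the order.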
Chapter 15, Evaluation: inspections, analytics, and models
This chapter focuses mostly on two different evaluation methods.
The first is heuristic evaluation, where you evaluate your system against a list of “usability principles”, iterating over the system again and again until you have revealed most of the usability problems. The book lists ten different heuristics, and I liked many of them; they are important and easy to forget. One of them is User control and freedom, which reminds you that it is good to have a go-back button if the user wants to leave an unwanted state. Aesthetic and minimalist design is something we have really been trying to apply in our design: every extra unit of information competes with the relevant units of information and diminishes their relative visibility. This is why we often asked ourselves: do we really need this? Is it really important?
Another thing we have been talking about in this course is also mentioned here: it is important to follow standards, so that icons and functions work in the way users are already used to.
The text says that this method requires experts: specialists who act as users and then offer their opinions. The main problem, as I understood it, is that one or two experts are not enough; you will need many of them, and that is expensive.
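The heuristic-evaluation process described above can be sketched in a few lines of code. This is my own illustration, not from the book: each usability problem found is tagged with the heuristic it violates and a severity rating (a common practice in heuristic evaluation), so the worst problems can be sorted to the top before the next design iteration. The example findings about a traffic-information app are hypothetical.

```python
# Recording heuristic-evaluation findings (illustrative sketch only).

HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def add_finding(findings, heuristic, description, severity):
    """Record one usability problem; severity 0 (not a problem) to 4 (catastrophe)."""
    assert heuristic in HEURISTICS, f"unknown heuristic: {heuristic}"
    findings.append({"heuristic": heuristic,
                     "description": description,
                     "severity": severity})

def worst_first(findings):
    """Return findings sorted with the most severe problems first."""
    return sorted(findings, key=lambda f: f["severity"], reverse=True)

findings = []
add_finding(findings, "User control and freedom",
            "No way to go back from the route-detail screen", 3)
add_finding(findings, "Aesthetic and minimalist design",
            "Weather widget competes with the traffic information", 2)

for f in worst_first(findings):
    print(f["severity"], f["heuristic"], "-", f["description"])
```

With several experts, each would produce such a list independently, and the lists would then be merged, which is exactly why more evaluators find more problems and also why the method gets expensive.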
The other type they focus on is walkthroughs, which are good for evaluating smaller parts of a project; there are both cognitive and pluralistic walkthroughs. The book says this about walkthroughs: "Walkthroughs are an alternative approach to heuristic evaluation for predicting users’ problems without doing user testing. As the name suggests, they involve walking through a task with the product and noting problematic usability features".
In my opinion the pluralistic walkthrough should be more effective, because working in a group often delivers a better result than working alone. This is what the book says about the pluralistic walkthrough: "In a pluralistic walkthrough, each of the evaluators is asked to assume the role of a typical user. Scenarios of use, consisting of a few prototype screens, are given to each evaluator who writes down the sequence of actions they would take to move from one screen to another, without conferring with fellow panelists. Then the panelists discuss the actions they each suggested before moving on to the next round of screens. This process continues until all the scenarios have been evaluated". To me this sounds like a really effective way of evaluating your prototype, but at the same time a really costly one, so for smaller projects it might be more realistic to have one or two people perform the walkthrough.
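The round structure quoted above can be sketched as a small simulation. This is my own illustration, not from the book: for each screen, every panelist writes down an action sequence independently, and only then are the answers compared, so no one is influenced by the others. The panelist names and action sequences are hypothetical examples for a traffic-information prototype.

```python
# One round of a pluralistic walkthrough (illustrative sketch only).

def walkthrough_round(screen, panelists):
    """Collect each panelist's independent action sequence for one screen,
    then report whether the sequences diverge (i.e. discussion is needed)."""
    # Each panelist answers without seeing the others' answers.
    answers = {name: propose(screen) for name, propose in panelists.items()}
    # If more than one distinct sequence was proposed, the panel discusses it.
    disagreements = len({tuple(seq) for seq in answers.values()}) > 1
    return answers, disagreements

# Hypothetical panelists, each mapping a screen to a proposed action sequence.
panelists = {
    "Anna": lambda screen: ["tap search", "type destination", "tap route"],
    "Ben":  lambda screen: ["tap map", "long-press destination", "tap route"],
}

answers, disagreements = walkthrough_round("start screen", panelists)
for name, seq in answers.items():
    print(name, "->", " / ".join(seq))
print("needs discussion:", disagreements)
```

The independence step is the whole point of the method: divergent sequences on the same screen are exactly the usability problems the discussion phase is meant to surface.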