In chapter 13 the focus was user tests and DECIDE, a framework that helps you remember the different things to think about when constructing an evaluation. The authors emphasise that the framework is not a simple list: you should go back and forth between the different issues several times to construct the best possible evaluation study. For example, issue 4, "Identify practical issues", affects the approach and method chosen in issue 3.
Chapter 15 is about evaluation methods where the users are not present, such as inspections, analytics and predictive models. One kind of inspection is heuristic evaluation, which evaluates the interface against tried and tested principles. It can be used at any stage in the process, but to work well it needs experts (preferably several of them), which is expensive. It should not replace user testing, but complement it. When reading this I feel like you could use the heuristics as a checklist while designing, and through this minimise bad interface decisions from the start.
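The checklist idea above can be sketched in code. This is only an illustration, not a real tool: the heuristics listed are a subset of Nielsen's ten usability heuristics, and the pass/fail judgements would in practice come from an expert inspecting the interface.

```python
# A minimal sketch of heuristic evaluation used as a design-time checklist.
# The heuristics are a subset of Nielsen's ten; the judgements dict is a
# hypothetical expert's verdict on one interface (True = satisfied).
NIELSEN_SUBSET = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
]

def violations(judgements):
    """Return the heuristics the expert marked as violated (or left unjudged)."""
    return [h for h in NIELSEN_SUBSET if not judgements.get(h, False)]

expert = {
    "Visibility of system status": True,
    "Match between system and the real world": True,
    "User control and freedom": False,
    "Consistency and standards": True,
    "Error prevention": False,
}
print(violations(expert))  # ['User control and freedom', 'Error prevention']
```

Running this with several experts and merging the lists would mimic the book's advice to use a couple of evaluators rather than one.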
They also talk about analytics, which is basically logging users and analysing their behaviour. The advantage is that it is easy to gather large amounts of data without the users being present, and that the data is easy to visualise. There might, however, be ethical issues regarding privacy. As they mentioned in chapter 13, regarding the second D in DECIDE, our online world develops fast and the ethical guidelines meant to protect us are taking time to catch up. We do agree to user agreements (in walls of text), and thereby accept the use of our data, and it IS effective to analyse how users behave, BUT is it okay? And how much is okay? And what type of data is okay to analyse, and for what?
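To make "logging users and analysing their behaviour" concrete, here is a minimal sketch of the mechanism behind analytics: each interaction is appended to an event log, which is then aggregated. The event names and fields are my own illustrative assumptions, not anything from the book.

```python
# A minimal sketch of analytics-style event logging and aggregation.
# Each user interaction becomes one event dict; aggregation then shows
# behaviour patterns, e.g. views per page.
from collections import Counter
from datetime import datetime, timezone

def log_event(log, user_id, action, page):
    """Append one interaction event to the log."""
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "action": action,
        "page": page,
    })

def page_views(log):
    """Aggregate the log: how many 'view' events per page."""
    return Counter(e["page"] for e in log if e["action"] == "view")

events = []
log_event(events, "u1", "view", "/home")
log_event(events, "u2", "view", "/home")
log_event(events, "u1", "view", "/pricing")
log_event(events, "u1", "click", "/pricing")

print(page_views(events))  # Counter({'/home': 2, '/pricing': 1})
```

Note that even this toy log ties behaviour to a user id, which is exactly where the privacy questions above begin.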