Let's make a prototype!
Wednesday, 14 October 2015
Tuesday, 13 October 2015
Thursday, 8 October 2015
Feedback received in ÖVN4
Today in the exercise session we received feedback from group B3:
- How subtle can our implementation be while still remaining accessible when the cart is very crowded? It should also not be disturbing when the cart is close to empty.
- Could the walls be used instead of the floor for better visibility?
- How does the solution affect people under the influence of drugs and/or alcohol? This might be the case after concerts.
- Is it worth making the application cater to color blind people if the solution loses too much function for people with fully functional color vision?
- In which ways could the system be used for additional fun purposes?
- Is the system clear enough on its own? Does the user need additional help?
- In winter the floor will get filthy. How would this affect the system?
- Consider working with the sound space as well.
- Consider color temperatures shifting for time of day and seasons.
- The sketches were a bit unclear in what they were actually showing.
- Nielsen heuristics were hard to apply to our concept.
- Could we use targeting towards specific individuals in any way?
Reading seminar 2
After Tuesday morning's discussion at our group meeting about the project, it took us a while to steer back towards our readings. We started to discuss the chapters and the key points, along with the questions we had. Viktor wondered whether we can use hedonic heuristics to evaluate our product.
The answer is probably yes, but it's not that simple. We need to establish what our goals are in terms of the level of the project, and this was something both Linnea and Emma had thought about in their questions regarding the chapters. Emma thought the iterative process was an interesting starting point, since it's easy to think of the design process as linear even though it's more complex, and wondered: if you keep finding new requirements, when do you convert them into heuristics? Linnea's question was along the same lines: will we be able to establish what we need to reach with our requirements? How will we know when we're done? Some of the subjects we handled during today's discussion were:
- Hedonic heuristics
- Cognitive walkthroughs
- Aesthetic design to change behaviour unconsciously
- Are rewards and feedback important here?
- Expert input, previous work done on colors
- Design for appropriation
- Other uses than the one initially presented
- We’re not afraid to evaluate and reevaluate
Ariel had thought about the evaluation process and stated the following: it makes you think about what an ideal way of testing our concept would be if we had unlimited $$$ and time. The group agreed that this is a super interesting thought. It also led to discussions about what we could do with our idea if we had those kinds of resources. It let us think about the craziest things for a short while, which was fun and inspiring.
We then continued going through the questions everyone had brought with them, and Frida told us she liked reading about media ethics, which we discussed for a while. We came to the conclusion that this would not be a problem in our process, since our idea doesn't depend on any stored user data. It is still a very interesting topic, though, that's always good to have in mind. Frida also presented the method of using math as an evaluation tool to the rest of the group. This made us think about whether there could be some kind of math or statistics that could help us figure out how to design the lights in our product to fill the purpose we want. Colors, intensity and placement can also be crucial when designing the lights. We all agreed this is something we need to look into more deeply.
The discussion went on to Josefine's question about whether we need help from experts, or if we want to involve the users to make our product fit our goals and requirements. We connected back to cognitive walkthroughs, which are probably a very reasonable alternative for us. Around now we realised time was running out, but we all agreed that this is something we need to elaborate on!
NEW PERSONA! Meet Viktoria, 27 years
Viktoria is the newest addition to our personas and represents another part of our target group we felt we had to add to the mix. We've switched our previous personas around, so that Peter is now our primary persona whilst Johan is secondary. Viktoria is somewhere in between the two.
Viktoria is 27 years old and lives in Stora Essingen. She works in construction and normally drives her car to and from work. She doesn't pay much attention to rush hour traffic, but she does own an SL card and takes the "Tvärbana" to and from one of her biggest passions in life: rock concerts. She's an outgoing girl who isn't afraid to speak her mind. She's also a very hands-on type of person and doesn't use technology much beyond her daily dose of Facebook. She's the clickbait-sharing type, though.
Scenario
Viktoria is going to the most epic concert tonight. Metallica is playing at Globen and she is so excited! She and a couple of friends have had a pre-party at her place and are now headed towards Globen. The train is a little crowded, but since Viktoria and her friends are only going a couple of stops they feel no need to move further into the train and stay close to the door. At Valla torg a guy (pretty handsome) with a kid gets on the train. Viktoria sees them trying to get on board. She gets a couple of pushes, but whatever. She doesn't realize that she is in the way, not giving people a chance to get through. Viktoria keeps talking very loudly with her friends about the concert, feeling excited.
Tuesday, 6 October 2015
Indirect observations
Yesterday I was travelling home from pizza night at a friend's place close to Linde, which is a station on our route. My way home was to go to Linde and take the Tvärbana to Stora Essingen. When I checked my phone for when I needed to leave, I saw in our group chat that Frida had observed that the Tvärbana was not in service between Gullmarsplan and Liljeholmen, due to a contact wire that had been ripped down. Oh no! I thought, and checked the SL app for information. What I saw was this message:
"Buss between Gullmarsplan Stop Z and Liljeholmen because of ripped down contact wire. The trains are going by ca 10 min traffic"
My interpretation of this was that the bus went only between Gullmarsplan and Liljeholmen directly, without stopping anywhere in between. By this time I was already at Linde station waiting for the train, so I started to walk to a bus stop to take the bus back to Gullmarsplan. Instead, a replacement bus came that took me to Liljeholmen (and all the other stops on the original Tvärbana route), where the functioning Tvärbana was waiting.
So my conclusion from this random observation is that the replacement system works well, but that they are not very good at communicating how they have solved the problem for the travellers.
The concrete design
This morning we sat down to put down our ideas to paper. This is the concrete design concept that we came up with.
IDEA
Use lighting in the train cart floor to indicate to travellers whether they should move further into the cart or not.
TARGET GROUP
Our idea primarily applies to people travelling in event traffic, with focus on those who do not understand that they are causing crowding when they stop right at the doors.
However, we still think this idea is also very applicable in a more general setting.
PHYSICALITIES
We imagine our idea as some kind of flooring that is interactively accessible to travellers. It should be a very simple design where the system does the thinking for you. As a traveller you should just have to stand in the right place, for example in the middle of the train cart where fewer people usually stand. Whether you are standing in the wrong or the right place, the system will indicate this with some form of lighting.
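The core of this "the system does the thinking" behaviour could be sketched roughly as follows. This is a minimal, hypothetical sketch assuming each floor zone has an occupancy sensor reporting a load factor between 0 and 1; all zone names, thresholds and colors are illustrative, not part of our actual design decisions yet.

```python
# Hypothetical floor-lighting logic. Assumes each floor zone has an
# occupancy sensor reporting a load factor in [0, 1]; zone names,
# thresholds and colors below are purely illustrative.

def zone_color(load: float) -> str:
    """Map a zone's load factor to a floor-light color."""
    if load < 0.4:
        return "green"   # plenty of space: invites travellers to move here
    elif load < 0.8:
        return "yellow"  # filling up
    return "off"         # crowded: no light, to avoid visual noise

def cart_lighting(loads: dict[str, float]) -> dict[str, str]:
    """Compute a light color for every zone in the cart."""
    return {zone: zone_color(load) for zone, load in loads.items()}

# Example: doors crowded, middle nearly empty
print(cart_lighting({"door_a": 0.9, "middle": 0.2, "door_b": 0.7}))
# → {'door_a': 'off', 'middle': 'green', 'door_b': 'yellow'}
```

Note that the crowded state turns the light off rather than red, following the feedback from group B3 about keeping the system subtle and not disturbing; whether that is the right choice is exactly the kind of thing we would need to evaluate.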
INTERVIEWS
Based on interviewees' thoughts and views on this route we identified this problem. Most interviewees complained about trains being extra crowded after an event and moderately crowded about half an hour before.
COGNITIVE ASPECT
Since this design relies on quite simple visual cues, it is important to get those right. We believe the design should be backed up by thorough material on how smaller crowds react in this environment, how they respond to colours, and how this process should be carried through.
Team work
This morning we had a meeting where we took some time to discuss our thoughts about the group and our teamwork during the self reflection and peer review. The discussion can be summarized as: we think that we have a good diversity of skills in our group and that our teamwork is working well. We feel that the workload is not yet high enough for us to be using our whole potential, but we think the project will become more and more challenging further on. It will probably also become more fun, since we will work more with our own ideas rather than pure course material. Overall we are very satisfied with the way our teamwork is working out, and we are all looking forward to continuing with our design project.
Labels:
Ariel Blomqvist Rova,
Emma Klint,
Frida Eklund,
Josefine Möller,
Linnea Holm,
Viktor Gustafsson
Monday, 5 October 2015
Seminar 2 - Linnea Holm
I think we're at a great stage of our design project: the stage right before we've started to form a more tangible idea to develop into our prototype. Obviously, this is the stage where evaluation methods need to be taken into consideration, even before formulating the specifics of our design. The design process is, according to chapter 13, always a process where the designers are working to develop a product that meets users' requirements, but where we as designers need to realize that understanding and settling on these requirements is more of a negotiation over time. When the users' needs are understood, the design reflects this.
Chapter 13 focuses on the framework for different evaluation tactics, primarily the DECIDE framework, an iterative process where it's important to go over each step in preparing for evaluation. I really thought these were accurate questions and checkpoints for setting up an evaluation method. Who wants the design, and why? What are the attitudes, and how is the feedback during testing working? What approach should we take? Chapter 15, however, goes into how heuristic evaluation and walkthroughs work. Those two are inspection methods where experts evaluate, according to certain guidelines, whether the user interface elements conform to certain principles, called heuristics. A cognitive walkthrough evaluates a design based on the learning experience, also mostly carried out by an expert according to certain principles. Sometimes there aren't enough resources or time for field studies or user studies, and that's where these methods come in.
Chapter 13 also goes into analytics, which is based on observing or logging users and analysing their behaviour, something that is often used for handling large amounts of data that can be visualised easily. If our idea forms into what I think it will, without getting too far ahead of myself, this is something we probably need to use for our project. We need to think about what evaluation methods to use, and really consider what requirements we want our project to be based on. Will we be able to establish what we need to reach with this? How will we know when we're done?
Seminar 2 - Frida Eklund
I really enjoyed reading these chapters. They were interesting and very hands-on about how to do things and what to think about, not only when evaluating but also when designing. It feels like we are one week behind in the design process (not our team, but the whole course), which makes the evaluation methods hard to apply to our work. It's therefore also hard to reflect on which methods we should use, but it's good that we have this theoretical base when choosing how to evaluate our future product.
In chapter 13 the focus was user tests and DECIDE, a framework that helps you remember the different things to think about when constructing an evaluation. They emphasise that the framework is not a list, but that you should go back and forth between the different issues several times to construct the best evaluation study. For example, issue no. 4, Identify practical issues, affects the approach and method chosen in issue no. 3.
Chapter 15 is about evaluation methods where the users are not present, such as inspections, analytics and predictive models. Inspections can be, for example, heuristic evaluation, which evaluates the interface against tried principles. It can be used at any stage in the process, but needs experts (and preferably a couple of them) to be good, which is expensive. It should not be used instead of user testing, but as a complement. Reading this, I feel you could use it as a checklist when designing, and thereby minimise bad interface design.
They also talk about analytics, which is basically logging users and analysing their behaviour. The advantage is that it is easy to gather large amounts of data without the users present, and that the data is easy to visualise. There might, though, be some ethical issues regarding privacy. As they mentioned in chapter 13, regarding the second D in DECIDE, our online world develops fast and the ethical guidelines to protect us are taking time to catch up. We do agree to user agreements (in walls of text), and thereby accept the use of our data, and it IS effective to analyse and see how users behave, BUT is it okay? How much is okay? And what type of data is okay to analyse, and for what?
Seminar 2 - Ariel Blomqvist Rova
I think that this is where it gets interesting. This is where it becomes design: testing ideas against the real world.
Chapter 13 introduces guidelines for different evaluation tactics.
Controlled settings involving users, fit for usability testing, for example in a lab environment.
Natural settings involving users, fit for identifying opportunities and establishing requirements.
And finally, any setting not involving users, focusing on using predictive models to identify usability issues that users might have. More on that later.
I found both of the case studies very interesting. Especially the in-the-wild study of skiers, because it made use of so many different technologies that were unobtrusive to the user being tested. It also reminded me of a bachelor's thesis study (kexjobb) I took part in this spring, where I got to watch a CS:GO stream with an eye tracker that logged how I responded to ads in the stream. Very cool! Makes you think about what an ideal way of testing our concept would be if we had unlimited $$$ and time?
Chapter 15 dives deeper into heuristic evaluation, walkthroughs and analytics.
Analytics is quite self-explanatory; the other two are more interesting:
Heuristic evaluation is when an expert role-plays as a user and inspects the service or technology from the viewpoint of certain design principles. Of course, this can never generate results as specific as a test involving users, but it's a relatively cheap way to avoid common pitfalls.
Especially interesting for us is that there are apparently many evolutions of the original heuristic framework, catering to different technologies. In our case, since we're moving in some kind of realm resembling ambient devices/displays, we might wanna look into heuristic evaluation of ambient devices, as well as hedonic heuristics and similar. Another realm we are near is emotional design; maybe there is a framework for that too!
Another method from chapter 15 is the walkthrough. The cognitive walkthrough is about analyzing whether or not a task is cognitively easy enough to carry out. This is also most often done by an expert.
Predictive models are mathematical approaches which can be used for example to find out the ideal placement of buttons.
Seminar 2 - Viktor Gustafsson
In the current stage of our project we are just about to start forming more detailed and well thought-out prototypes that are the products of our design iterations so far.
If our prototypes get good responses in Thursday's exercise I am really looking forward to starting to evaluate our work. The whole point of evaluation is to make sure that the ideas that have been formed correspond to what the users want. Different ways of doing evaluations are discussed in chapters 13 and 15. In chapter 13 the literature focuses on the DECIDE framework, which basically is a checklist one can iterate over when planning evaluation studies. In chapter 15 we instead learn how to actually do heuristic evaluation and walkthroughs. These are also called inspection methods, which hints at the form the evaluation takes. Sometimes we can't be in contact with the users, so we need to use different methods. One way of doing this is "pretending" to be the user and interacting with the designs or prototypes. This is quite difficult, and the people using these techniques are usually experts. Heuristic evaluation was mainly developed by Jakob Nielsen and has been boiled down from 249 heuristics to 10, which are to be used when testing aspects of an interface.
Walkthroughs are alternative approaches to heuristic evaluation. The cognitive walkthrough mainly focuses on evaluating a design for ease of learning and is considered undetailed, in contrast to the pluralistic walkthrough, where you set up a whole team of experts, developers and users who with their combined knowledge can examine and evaluate in great detail.
One really interesting form of heuristic evaluation that the literature brought up was hedonic heuristics, which evaluate how the users feel about their interaction. For example, the user can indicate whether the interface or product makes the time spent enjoyable. Maybe this can be useful for us in our project, in the sense that the user can reflect on their feelings in correlation with our idea, AND while sitting in the train cart or standing on the platform. Can we use hedonic heuristics to evaluate our product?
Predictive models really were an eye-opener to me, especially Fitts's Law. I was really fascinated that there is a mathematical relationship that can describe why the size of buttons matters, and why the best spots on an application, in terms of usability, are in the corners.
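That relationship can be sketched in a few lines. This uses the common Shannon formulation of Fitts's Law, MT = a + b * log2(D/W + 1), where D is the distance to the target and W its width; the constants a and b below are illustrative placeholders, not measured values.

```python
import math

# Fitts's Law (Shannon formulation): MT = a + b * log2(D / W + 1)
# D = distance to the target, W = target width.
# The constants a and b are illustrative, not empirically measured.

def movement_time(distance: float, width: float,
                  a: float = 0.1, b: float = 0.15) -> float:
    """Predicted time (seconds) to hit a target of a given width at a given distance."""
    return a + b * math.log2(distance / width + 1)

# A big, nearby button is predicted to be faster to hit than a small, distant one:
print(movement_time(distance=100, width=50))   # ≈ 0.34 s
print(movement_time(distance=500, width=10))   # ≈ 0.95 s
```

This is also the intuition behind the corners being prime real estate: since the cursor stops at the screen edge, a corner target effectively has very large width, driving the predicted movement time down.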
Sunday, 4 October 2015
Seminar 2 - Josefine Möller
We are now at the stage where we have a sense of what we are going to do. After last week's brainstorming we came up with a couple of good ideas that we are going to work more with, to see which one to pick.
We will soon enter the phase when it's time to establish goals and look into the questions that follow from them. To do this we can use the DECIDE framework described in chapter 13 of the book. This is the phase where you plan an evaluation and give it a good structure. An evaluation is good for getting the most out of the design and for really making sure the product fits the target group.
If we make sure to structure everything up, it'll be easier for us to figure out what kind of evaluation we need to do later on. Do we need help from experts, or do we want to involve the users to make our product fit our goals and requirements?
I found walkthroughs to be an interesting inspection method. They work so that you go through the process of completing different tasks on, for example, a webpage. For every step the user is supposed to take, you stop and evaluate to see if the user knows how to do things, what to do next, and whether he or she made the right choice or not. This would be really interesting to try out. Maybe we get the chance to do that later on.
We also need to keep in mind that even though we think our product works perfectly, the users might not get it at all. Therefore I think that even if we decide not to involve them as evaluators right away, it will be important that we at some point get their opinions through some kind of testing against reality.
It is important to keep in mind that not even one expert alone can figure out all the features that need to be in the product for it to work the way it's supposed to. There needs to be a whole team for that. Good thing we are a team! I sense greatness coming out of this.
Reflections after exercise 3
This Thursday morning (1st of October) the exercise consisted of an epic brainstorming session.
As a task before the exercise we had finished our personas and scenarios, which can be viewed in the earlier blog posts. The main point of today's session was to use the results from the brainstorming and bridge them onto the personas and scenarios, to get a broader picture of what kind of product we could start prototyping. We are still trying to keep our minds open a little while longer, just to let the new ideas really sink in.
We met up right after the exercise to summarise and go through the schedule for next week, to make sure we have enough time to work on coming up with some final ideas to present.
All of us agreed that the brainstorming was an efficient way to come up with new angles for quickly conceptualising our thoughts. And we feel quite confident that we will come up with some solid ideas by next time.
One great thing that we really took with us from last week was that we had this short meeting afterwards, and we think that it will help us structure the work in a more convenient and non-stressful way.
Here are some joyful pictures from today:
Friday, 2 October 2015
Exercise 3 - Brainstorming
Seminar 2 - Emma Klint
Before the first seminar I thought a lot about requirements and presented the question "Is a software project ever considered all done?". The reading for this seminar made me think even more about requirements, since they can actually help, in a (for me) new way, to complete a project.
It's preferable to make a couple of evaluations during a project, whether to test an idea, a user-ready product, or anything in between. By doing evaluations, designers can gather information about users' experience, way of interacting and/or thoughts about the design. These evaluations can be done in several different ways. How much a user is involved can differ, if a user is involved at all. The evaluation can also take place in different contexts and situations depending on the aim of the evaluation. I think it could take some practice before one knows the right kind of evaluation for a specific project.
One way to evaluate without the presence of users is to do an inspection, also known as a heuristic evaluation. An inspection is basically an advanced checklist that an expert, who has knowledge about interaction design and different users' behaviours, uses while going through the interface several times. It's also good to use several different experts to make sure all problems are detected. The bullet points on the checklist are called heuristics and contain a set of usability principles, with specifications about e.g. error prevention and how much control and freedom the user should have.
Back to my question: the requirements for the design can be converted into heuristics. Isn't that amazing news? Therefore I'm even more convinced of my previous thoughts on the importance of good requirements. It will help to define a finish line for the design. You can keep a team motivated! There's a light at the end of the tunnel! My questions for this seminar are, however: when do you stop finding new requirements, and when do you convert them into heuristics? Is there even a way to see a project as the linear process I seem to?