Friday 28 October 2016

Theme 6 Comments


I think you bring up something interesting regarding journals’ preferences for one type of research methodology. There are obviously pros and cons to both qualitative and quantitative methods, and saying that one is generally better than the other is a problematic position to take. They’re useful for different purposes and can yield different types of results. There can of course be limiting factors that determine which methods can be used. I guess quantitative methods are regarded as more objective since they rely on mathematical tools, and are therefore often preferred. However, for answering why-questions they might not suffice. So a question to think about could be: why should you prefer one over the other? One could also combine these methods to broaden the perspective.


I agree with you that case studies are quite flexible. I also see a strong connection between case studies and research through design. In both, the researcher seems to have a lot of freedom to shape the research and choose which methods to use. These choices don’t need to be explained to the same extent as in a purely qualitative or quantitative study. Often it seems like they resort to using well-established tools. Also, as you point out, it’s about exploring a new field. When doing so it seems common to use multiple tools of data collection to reduce the risk of having too little useful data. Introducing new research tools won’t have as severe consequences if there’s enough data provided by other tools.



I was also a bit confused about case studies before the lecture and the seminar. I still thought they were quite flexible when it came to choosing research tools and methodologies, but I thought it was very strict that the researcher wasn’t supposed to intervene. However, like we discussed during the seminar, I do agree that as long as the phenomenon isn’t in itself altered, interventions are okay. But it’s a fine line to walk. In the research on being car free, the researchers were balancing on this line. One could argue that the participants got extra motivation from some of the documentation tools they were provided. That’s maybe not the best example, but instead of just observing (which I thought was the rule for case studies) the researchers actually affect the test group. Still, as long as the influence isn’t too great, the study can probably provide valuable insights into the field.




I feel the same way about case studies. The fact that they try to capture real-life behaviour, and not behaviour in an artificial environment, makes them very interesting. This is also why I’m skeptical and critical of researchers intervening in them. The researchers have to make sure that their interventions don’t alter the observed scenario too much. Case studies are very much about exploring, but if you explore something which is made up, it turns into an organized experiment.



I think it’s interesting, as you point out, that the sample was selected cautiously. I think they picked highly motivated participants because it would be hard to find people who would voluntarily sign up for a study like this otherwise. But this also has implications for the study. The hypothesis that “people will save money by not using a car” will be easier to confirm with a highly motivated test group. A less motivated group might end up using more expensive alternatives. So the sample is obviously important for the generalizability of the study. However, generalizing is often not the goal, but rather coming up with research questions, as we’ve discussed in class. Still, I do think that with too narrow a sample you run the risk of coming up with research questions which are maybe not relevant for a larger population.



What I’ve always found problematic with semi-structured interviews is that it can be quite tough to find patterns between the participants’ answers. Another thing is that you might come up with good follow-up questions during one of the last interviews, which you wish you had asked during all the interviews. I guess it could be a good idea to do pilot studies in order to analyse what potential follow-up questions might arise. I agree that it can be good if the participants have the right knowledge, but I also think that differences in knowledge can yield interesting results. It really depends on what the purpose of the study is. Like with your example about couples, it’s obviously a requirement that they have a certain knowledge. Maybe this is more often a requirement for more specific research questions.


I agree that you can compare with existing literature and research in order to try to generalize. However, as we spoke about in class, a case study is something you do when the field is new and the existing research in the area is limited, so it could be quite hard to increase the generalizability only this way. Better ways, I think, would be to study more cases, use a larger test group and cross-compare these. The main goals of case studies are to gain insight into the field and come up with research questions, so generalizability isn’t necessarily a high priority.

You bring up that case studies start with a broad research question and then try to narrow it down. I agree in a way: the number of questions doesn’t have to decrease (most likely it will increase), but the questions should become more specific.

I was also a bit confused that case studies don’t need a hypothesis when starting out; I too was taught that you need one beforehand. I guess it makes the study a bit broader, since a hypothesis is usually quite narrow. Research questions give you freedom to explore a lot more, because hypotheses are narrowed down and simply get confirmed or denied. What I like about case studies is that you don’t set out expecting to find something. It can also make it hard to know what you should look for. That’s why it can be useful to use many different tools for observing and acquiring data. This could also make the data analysis step quite complex. Looking for patterns between cases is one thing you could do, but that would require you to look at multiple cases.



Interesting that you bring up population selection. As you say, all people are different, so in order to conduct an optimal study you would have to include all the people in the world, which of course isn’t feasible. But you could also target different groups, e.g. a certain age range, which would narrow down the population. Still, it’s very interesting to read the motivation for how the population was sampled. In my experience, accessibility is a dominant factor, but it’s also closely linked to other factors such as financing. I don’t know if there exists a standardized way of sampling a population (there probably is in some sense), because it depends so much on the type of research being done and where it is performed. It’s probably easier to motivate if a certain skill or knowledge is required from the participants.



I also find the financial aspect of research quite interesting. I too have thought about the fact that qualitative research is more expensive, especially for larger studies. This is a real benefit of quantitative methods and an important factor to take into account when designing a study. I wonder if a lot of originally qualitative research ideas are turned into quantitative research because of this.

Final Reflection

In the course we’ve been going through many different research methods: quantitative research, research through design, qualitative research and case studies.

Quantitative and qualitative methods both refer to what kind of data is collected and processed in the research. Roughly, the main difference between the two is that quantitative research handles larger amounts of data, which can be processed with various mathematical tools. Qualitative research is then the opposite: generally less data, but of a richer kind, such as written text, which requires more manual processing.

These are quite broad definitions, so naturally they can be included in both research through design (RtD) and case studies (CS). What I mean is that both of the latter can use quantitative and qualitative methods, at least for the analytical part. The methodology of RtD is mostly qualitative since it uses common design principles. In CS, on the other hand, quantitative methods are also used. These methods share a lot of similarities, but it’s useful to point out what separates them.

An essential difference is in terms of intervention. Given that design is generally an iterative process, intervention is a crucial part of RtD. Through the design you learn which parts work well and which are problematic. Having the flexibility to continuously make changes to the design is what makes this kind of research good: you can start out with a basic design and then make improvements throughout the process. In this aspect CS is pretty much the opposite; intervention should be kept to a minimum. One could say that RtD is both about observing and interacting, whereas in CS you only have the observational part. It can be argued that some intervention in CS is fine as long as it doesn’t change the phenomenon which is to be observed. It’s important to point out that if too much intervention is done in a CS, it actually starts to become more and more similar to RtD or a qualitative study.

Both of these approaches are great for new research fields where the existing literature is limited. They have a more explorative nature than plain qualitative and quantitative research, where the purpose is to answer research questions. CS and RtD are more about coming up with new research questions (this is especially true for CS) and, for both, about gaining insight into the field. The choices made in the study generally don’t have to be argued for to the same extent as with classic qualitative and quantitative methods. Since they’re mainly about exploration, a smaller test population and a limited set of cases are usually used. A consequence of this is that the studies aren’t generalizable, but that’s not really the purpose either.

So this leads to the question of which method to use in which case. There’s no straightforward answer here because it depends on a lot of factors. It’s good to start by asking what the purpose of the study is. This can actually narrow the options down quite a bit, but there are other important things to think about, such as financing, accessibility of tools, the test group etc. Sometimes it could actually be good to use a hybrid approach and combine different methods. Having multiple perspectives is generally regarded as something positive, and this is something you get by combining, for example, quantitative and qualitative methods, just like you can get by having multiple researchers conducting a study.

As I’ve pointed out, there’s some merit to both quantitative and qualitative methods, but it seems that some researchers generally prefer one over the other. Personally I like quantitative methods more, and since I will write my master’s thesis this spring I am looking for a project where I’ll mostly use them. However, this is just a preference, because my interests lie more within the hard sciences. An advantage of quantitative methods is that they rely more heavily on mathematical tools, which are viewed as more objective (something we discussed during the epistemology phase of the course). Obviously it depends; it’s possible to manipulate values to “your advantage”. But at least quantitative methods are less reliant on the researcher’s interpretations. However, building theory from only quantitative data can be challenging, as in a paper I read during the course where the researchers tried to understand the meaning of numerical ratings on Netflix.

In case studies one can use a lot of different tools for data collection, as we saw an example of during the lecture about the “car free year” project, where they used diaries, interviews and more. Using multiple tools isn’t only a good idea for case studies but for research in general, especially if the research questions are complicated. Depending on the context, information might be easier to obtain with one tool than another. A potential problem with using multiple tools like this is that the task of analyzing the data becomes more time consuming.

Despite being more time consuming to process, collecting a lot of data provides a good foundation for building theory upon. When building theory you can combine different kinds of data; it doesn’t have to be strictly qualitative or quantitative. Actually, for strong theory it can be an advantage to do this. For example, quantitative data can support a theory which is more easily formulated from qualitative data.

In a dream scenario where there are no limiting factors, you could potentially use all sorts of different research methods. In a realistic scenario, however, there are always tradeoffs to make, and you have to prioritize the things most relevant to the research. Thankfully, quantitative methods have become more accessible than they were historically. The Internet and crowdsourcing have made it a lot easier to distribute surveys. Computers have also facilitated the processing of large data sets. For qualitative data this is still a lot trickier to automate, even though some very interesting work is being done in artificial intelligence and natural language processing, so who knows what the future of research will look like.

Tuesday 18 October 2016

Theme 6 Post 2

Qualitative studies are quite flexible, both in terms of ways of conducting a study and in terms of flexibility during the study. For example, you could do interviews, surveys etc. These can be performed in different ways, much depending on the setting. Sometimes a strictly structured interview is useful, while other times a semi-structured interview would be better, as it allows the test person to expand on their answers.

Less flexible methods are in my opinion more suited for confirming or denying one specific thing, when you are not interested in deviating from that. More flexible methods are better suited when you are interested in gaining new insight into a topic; they are a useful way of coming up with new research questions. This is also emphasized when talking about research through design and case studies.

During the seminar we talked about how multiple sources of data are good for qualitative studies. Being a fan of data-driven approaches, I agree that more data is generally better, but one should make sure it actually adds something to the project and doesn’t just make it more difficult to process all the data. I also think that researchers should always be able to motivate their choice of data collection methods. In scientific papers I sometimes get the feeling that people use, e.g., a diary simply because it’s a commonly used method. Sure, that gives it some validity as a research instrument, but it’s still important to motivate what it will bring to the study. It’s worth mentioning that the methods used aren’t entirely up to the researcher, since other factors such as financing can affect the choice.

Since qualitative studies don’t rely on mathematical tools to the same extent as quantitative studies do, the researcher’s role is in a way more important, since the study relies on the researcher’s interpretations and analyses. To reduce this influence, one option is to include more perspectives (more researchers).

Another thing discussed during the seminar was the problem of social desirability bias and how the researcher can influence the answers of the respondents. I do think that this problem is more prominent if the respondent is in a one-on-one setting with the researcher, and that it will decrease with increased anonymity. Furthermore, if the researcher already knows the respondents, they might choose answers which they believe are more pleasing to the researcher. Thankfully we have tools to combat this issue, such as the Internet, where it’s easy to distribute surveys. However, processing a large number of qualitative answers is very time consuming, so the number of participants is usually smaller than in quantitative studies. For this reason it can be hard to generalize the results to a larger population. So, just like with case studies, the intention is more about gaining insight into a topic and potentially coming up with more research questions.

Case studies aren’t inherently qualitative but can use qualitative methods, quantitative methods or a combination of the two. However, they resemble qualitative studies more because of the flexibility and intention I pointed out earlier. That said, a main difference in intention is that in qualitative studies you usually expect a result, whereas case studies are really more about discovery in an often unexplored research field. Even though it’s not the purpose, generalizing a case study is also not possible unless you consider many cases.

Before the seminar I argued that case studies are observations of a situation which hasn’t been created by the researcher. As discussed during the seminar, I now think that some minor interventions are okay as long as they don’t change the phenomenon too much.

Tuesday 11 October 2016

Theme 5 Post 2

I feel like I now have a clearer grasp of what empirical data is. It’s data which has been acquired directly through someone’s observations and experiences, not just any data which has been previously collected. However, this got me thinking about what the opposite of empirical data is. Unfortunately I didn’t get to discuss this during the seminar, but I spent some time thinking about it. Opposites of empirical data would then be speculations, hypotheses and basically anything which is unobserved and purely analytically thought out. Since empirical data has at some point been observed, it’s also possible to verify it.

Design work is in itself a knowledge contribution. One main thing that should be stressed is that this is almost the purpose of doing research through design. Design work is explorative. It can start out from common design principles, but these can change along the way, since the purpose is mainly to gain insight into the field where the research is taking place. The iterative nature of design work facilitates this, since changes can be made according to observations made in the study. It’s quite a flexible approach, as it can be tweaked during the study and not everything has to be thought out before beginning. Instead of having a lot of research questions at the onset of the study, the questions may emerge as it proceeds. Future research topics could actually be revealed as a result of the design work. I think the most important takeaway is that the intentions of design work are different from other research: in design work the intention is to gain insight while doing, whereas in other research the insight comes as the final result. I also think that in design it’s not necessary to explain all observations or design choices. This provides a lot of freedom to try things out and see if they work. Basically, it’s a bit like learning by doing.

I wasn’t exactly sure what was meant by whether design work is ever replicable. In some sense it is. You could use the exact same technology, given that it’s still available. From a technical point of view, wouldn’t this mean that the study is constructed in the same way? Probably, but one thing that has changed is the setting and the context in which the study is conducted. This could lead to unexpected results even though the study was very much the same. So it’s not only a matter of what technology is used; it’s also about the context of use. One could also think of replication as the methodology of a study being used again. The study could still be a replicate even though the technology has been renewed, as long as the core methodology and structure remain the same (like the sequel to the tangible programming paper). Another thing, which I would argue is even more important to discuss than replicability, is whether it’s really meaningful to replicate at all. Surely this depends on a lot of factors, but I would argue that in many cases, especially in the softer sciences, it’s not worth replicating older studies, mainly because the social context has changed significantly. In general, the harder sciences are more meaningful to replicate because they are less dependent on the social environment in which they’re performed. Social contexts incorporate many complex variables which will influence the scientific results.

Sunday 9 October 2016

Theme 6 Post 1

Which qualitative method or methods are used in the paper? Which are the benefits and limitations of using these methods?

Paper: A very popular blog: The internet and the possibilities of publicity
Brenton J. Malin

The qualitative methods used in the paper are analysis of earlier studies and literature, as well as examination and analysis of real examples. The author uses these existing studies and combines them with newly studied examples in order to identify patterns in publicity.

It’s an extension of earlier work, and the topic already has some established foundation. Combining different studies allows for a broader perspective in the analysis, which might provide some new and interesting insights. Not only does this give opportunities to expand on earlier work, it can also be a chance to replicate a study, at least the analytical part of it. Since this method is more of an observational one, it doesn’t require the researcher to design experiments. Doing experiments can be quite challenging, and there are many variables to think about that could cause problems.

Since it’s mainly an interpretive and analytical method, the researcher will (more or less) influence the result. One researcher could see things from a different perspective than another would. A question then becomes: are the sources cited really the most relevant ones? Obviously it’s not that black and white, and maybe there are other, more relevant papers. What’s important is diversity. By using multiple sources, you will hopefully have enough perspectives on the subject. This mix of perspectives is also important for building theory, and it might help to reduce the subjectivity of the research through a dialectic-like process.

What did you learn about qualitative methods from reading the paper?

The main takeaway from the paper is that qualitative methods are quite flexible. The shape of the research is very much up to the author, as long as it can be motivated. In quantitative research there are many standardized ways of performing research and tools to use for analysis. Qualitative research is in a sense less strict and a bit broader. The author’s train of thought must be clearly outlined for the research not to succumb to pure speculation. But it also allows for making assumptions when there isn’t necessarily any hard evidence to support them.

Which are the main methodological problems of the study? How could the use of the qualitative method or methods have been improved?

This study could try to include more perspectives. Sure, other studies are cited, but it could, for example, have included interviews with people to see if the author’s point of view is shared by others.

Briefly explain to a first year university student what a case study is.

A case study is a study where a particular situation is examined. The situation takes place in a natural setting, and the behaviors in it are what the researcher wants to study. It’s different from other studies where experiments are set up in lab environments; the latter are quite different from normal situations, and that can affect the test subjects’ behaviors. A goal of case studies is that more realistic behavior (unaltered by external inputs) is revealed. A case study can combine different research techniques and can be qualitative, quantitative or a combination of both. Examining more cases will likely increase the generalizability of the study. It is therefore also important to think about the selection of the case population. Having diversity in the cases can be a good thing, since it could shed light on patterns which wouldn’t have been identified otherwise.

An advantage of case studies is that they are observed continuously. It’s not just a collection of answers and statistics; given the context of a scenario, it’s possible to stumble upon new insights. Case studies can also help to explain why some behavior is observed, not just that it occurs.

Use the "Process of Building Theory from Case Study Research" (Eisenhardt, summarized in Table 1) to analyze the strengths and weaknesses of your selected paper.

Paper: Gang violence on the digital street: Case study of a South Side Chicago gang member’s Twitter communication. Patton et al. (published in New Media & Society, 2016)

Building on earlier research on gang violence, this paper sets out to understand whether there’s a link between online banging (aggressive and violent behavior) and gangs’ street behavior. The research is motivated mainly by the younger generation’s increased use of social media platforms. The authors focus on gangs in the Chicago area because of the area’s many problems with gang violence. The sampling of the population is not random: they have carefully chosen to focus their attention on one prominent female gang member. This is obviously a small sample and makes it hard to generalize the findings to a larger population. The method was mainly an analytical one; they chose to look at all the Twitter messages sent by the gang member as well as messages sent to or mentioning her. Only considering one social media platform is a limiting factor.

For a fresh topic like this it might be a good idea to do a case study; it could shed more light on the subject than the limited literature would. However, the analysis in the paper is a bit thin, and a big problem with this narrow approach is that it’s not possible to draw any general conclusions. It does, however, suggest that this topic might be interesting to look into more deeply in future research.

Monday 3 October 2016

Theme 4 Post 2

Before the lecture I thought it was a bit tricky to classify a paper as quantitative research, since papers almost always include some sort of analysis or interpretation in the discussion. I would argue that this makes the paper a combination of quantitative and qualitative research. However, one should probably disregard the discussion section and just focus on the methodology when categorizing papers.
Quantitative research should gather data in an objective manner, and the data should be quantifiable. There are different ways of doing this, and some might be better and more suitable depending on what’s being researched. The second part of quantitative research is the processing or analysis of the data. The tools for doing this come from mathematics, statistics etc. What’s good about these tools is that they’re often well established and widely used. What’s not as good is that it’s sometimes difficult to understand why they work. One example I can think of is word embeddings, a method used in natural language processing. It has proven very useful for finding concepts and relationships between words, yet it’s not very clear why it works; it just does. A big problem with quantitative research is that it’s not straightforward how to map complicated things in life, such as human behavior, to a limited number of quantifiables.
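To make the idea concrete, here is a minimal sketch with made-up, hand-picked vectors (real embeddings are learned from large corpora and have hundreds of dimensions); relationships between words show up as simple vector arithmetic:

```python
import numpy as np

# Hypothetical 4-dimensional embeddings, chosen by hand for illustration.
embeddings = {
    "king":  np.array([0.80, 0.45, 0.10, 0.70]),
    "queen": np.array([0.78, 0.48, 0.85, 0.68]),
    "man":   np.array([0.15, 0.20, 0.05, 0.60]),
    "woman": np.array([0.13, 0.22, 0.80, 0.58]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman should land near queen.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
for word, vec in embeddings.items():
    print(f"{word:>5}: {cosine(target, vec):.3f}")  # queen scores highest
```

A model like word2vec learns such vectors automatically from word co-occurrence statistics, and that learned geometry is exactly the part that is hard to interpret.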
In qualitative research, the methodology is a bit more liberal. Take for example a survey: the participants may be allowed to freely write down sentences or even paragraphs to answer a question. In a quantitative study this would most likely be replaced by, say, a question with a numerical answer in the range 1–10. The latter is much more restrictive in terms of content. It’s quite a challenge to understand what a test person meant by answering a 7, compared to an elaborate explanation in plain words. Furthermore, it’s also unclear to the test person what a number on the scale represents; people can interpret the scales differently. This was a big problem in a paper I read for Theme 3, where scientists tried to interpret users’ movie ratings on Netflix. On the other hand, answers in a qualitative study may be so varied that it can be hard to find patterns and draw any general conclusions from them. Even though quantitative methods have their drawbacks, scalability is a major advantage. One numerical answer might not give you a lot of insight, but combining a lot of data might. There are many techniques for extracting information from big data sets, and that is in itself a hot field of research.
There are obviously both benefits and drawbacks to using quantitative methods. A solution could be to combine quantitative with qualitative methods. For example, a numerical answer combined with a few sentences could give more insight into what the test persons are thinking. This can also serve as good feedback on whether your questions are well designed.
I’ve been focusing mostly on surveys in this post, but there are of course other ways to conduct studies. The main thing for quantitative research is that it needs measurables. Sometimes, depending on the setting, it can be hard to find those, or it just doesn’t make sense to quantify certain things.

An important advantage of quantitative studies is that they rely heavily on mathematical and logical tools. Since mathematics is often considered a pure form of knowledge, quantitative methods should be less subjective than qualitative ones.

Saturday 1 October 2016

Theme 5 Post 1

What is the 'empirical data' in these two papers?

Empirical data is basically data learnt through experience. Both of these papers use empirical data in order to explain the research topic and what perspective will be taken in the article. In the article by Lundström, a few different types of data are used. There was an analysis of discussions on online forums regarding the topic. They conducted interviews with experts, early adopters and people with sufficient experience of driving electric cars. A state-of-the-art analysis was also done. This gathered data could then serve as a basis for formulating a problem definition and as a starting point for what to focus on during the research. The study also used results from earlier studies, especially for backing up design decisions. It was mentioned in this paper that they had used empirical data from Nissan as part of the equation for estimating the energy consumption of the car. The other paper, by Fernaeus and Tholander, also used earlier studies as data to support their design decisions and their way of conducting the study.

The aforementioned data was used as input to the studies, but the studies also produced data in the form of results. The results were mainly new design principles and approaches suitable for the tasks described in each study. As mentioned before, these were a combination of prototypical work and design theories.

- Can practical design work in itself be considered a 'knowledge contribution'?

Returning to last week, where we discussed different theory types: the last category would fit these articles. Since theories are in a way knowledge, I would say that practical design work should be considered a contribution to knowledge. Also, following the papers, it is clear that a lot of knowledge is gained indirectly. Serendipity is a word I find suitable in this context: when working towards something, you might end up finding something else which is just as valuable, or even more valuable than the initial goal.

- Are there any differences in design intentions within a research project, compared to design in general?

Obviously, design is dependent on the usage and application of the product. Research projects have different purposes than commercial products. Generally, in research the focus is more on functionality than on a well-polished product. Shortcuts in the design are more accepted if they allow the study to be performed as intended. For design-driven approaches the intention is more about discovering something new, whereas in other research practices it’s common to want to confirm a hypothesis or something similar. The initial design is therefore not critical in design-driven research, since it can easily be changed along the way.

- Is research in tech domains such as these ever replicable? How may we account for aspects such as time/historical setting, skills of the designers, available tools, etc?

A big problem with replication is that technology is constantly changing. The tools we used 10 years ago might not be the same as today. In some cases this can render a research field completely irrelevant, if the technology has been replaced by something else. On the other hand, the methods used in earlier research can still be relevant; the design of the research might be reusable. One must also consider the setting in which the research took place. In a new setting, some technologies could be used in other ways or for new purposes compared to previously. Perhaps this can sometimes be accounted for by analysing the historical setting, but only to some extent. Studying old studies can be relevant in the same way as studying history in general is relevant: for learning what to do and what not to do in the future.

Even though technology and settings change, the scientific method is still relevant. It’s a cornerstone for conducting studies.

- Are there any important differences with design driven research compared to other research practices?

Generally, in design-driven research the theoretical basis is more limited. In other research practices, extensive theories are built by analysing a lot of earlier studies, and the decisions made in the studies are most often backed up by this. The design choices made in design-driven research don’t necessarily have to be argued for, but are very much up to the designer. This gives the designer a lot of freedom to continuously refine the design over the course of the study.

In other research practices, the design has to be very well defined from the onset, whereas for design-driven approaches the process is a big part of the research.

What are the implications of this? Are design-driven approaches looked down on in academia? I would say that these sorts of studies can be good, but they do run the risk of not being very useful: their solutions are relatively specific, and it might be hard to generalize them and reuse them in future research.

Tuesday 27 September 2016

Theme 3 Post 2

Theory is not raw data. To borrow a bit from Kant: “perception without conception is blind, conception without perception is empty.” Without applying any reasoning or concepts to data, it won’t be very useful. Theory is about the analysis of data. It’s about linking previous knowledge and research together with newly found data to analyse. Theory is what makes the data relevant. Theory also serves as a cornerstone for us to understand things in life.
I thought it was quite interesting that papers are sometimes rejected on the basis of too little theory. This really shows what significance theory actually has. The results aren’t in themselves very interesting; it’s when we start to analyse them and try to understand why they came out as they did that it starts to become interesting.
One thing that I’ve thought about is whether theory is qualitative or not. Surely it is, but is it only the qualitative part of, for example, a research paper? If it were, then theory would be subject to the writer’s interpretations, which it is to some degree. Theory is dependent on already established knowledge, and it will have observations or data to support it. These observations and data should be captured as objectively as possible. The amount of data backing up a theory then determines a large part of the strength of the theory. Combining data from different places to form an argument is also important for the strength of a theory. It’s similar to how cross-referencing in journalism increases the credibility of a news report.
What I think is important to point out is that theory is a layered construct. Theory is built with the help of earlier theories; the wheel isn’t reinvented for every research article, for example. What kind of implications might this have? A naive belief would be that since theories in part come from earlier theories, this will lead to an ever-expanding body of theory (if we use the term to encompass all existing theories). This would be wonderful, but theories might actually be wrong. Maybe an observation wasn’t actually what you thought it was, or a data set was tampered with. You would still be able to theorize on this and might actually come up with a reasonable explanation. Someone could then build their research on this, and it could potentially start a chain of flawed theory. Obviously, the sooner you discover the flaws in a theory, the easier it will be to retract it, and the retraction will face less resistance than for a well-established theory. One example is the theory of evolution, which to this day faces criticism from creationists (though much less than before), even though it has a lot of evidence to back it up.
It’s interesting to see what a big role the natural sciences have come to play in the forming of theories. The scientific community actually serves as a gatekeeper, deciding which papers, and with them which theories, get published. This should ensure that only sufficiently strong theory is published.
The notion of theory is not black and white. At first I didn’t really see the point of dividing theory into different types, but after discussing the topic more I do think this classification can be quite useful, mainly because it shows that not all research has the same purpose. Some research can focus only on explaining, while other research also takes on the task of prediction. It emphasizes that there’s some flexibility in the word theory.

Sunday 25 September 2016

Theme 4 Post 1

Paper: The impact of media multitasking on learning. Lee et al. Published in Learning, Media and Technology, March 2012 (at the time with an impact factor of 1.03, today 1.62).

The article’s purpose was to investigate how multitasking affects our cognition and comprehension. Earlier studies had focused more generally on how the brain works, so the angle for this continued research was learning. Previous studies had concluded that there is a limit to how much information we can process simultaneously, and different activities carry different amounts of cognitive load. Habits, for example, are much less demanding than learning a completely new task. So there’s a common belief that the brain is able to create schemas that facilitate processing of a known type of information. Building upon this, the researchers constructed an experiment around reading comprehension. A sample of 130 people was chosen to participate in a test made up of three parts. The first one allowed the test subject to read chosen literature and prepare to answer questions about it. The second one was almost the same, but this time a video would be screened in the background; however, the subject was informed that the video could be ignored and would not be part of the questions. The third part was the same as the second, except that the subject was now told that the video would be part of the following questions. Statistically, they couldn’t find any difference in comprehension between the first two conditions, but the third condition was determined to be significantly different from the rest. Those test subjects didn’t perform as well, which would indicate that the cognitive load was higher in the last scenario.
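The paper’s exact statistical procedure isn’t spelled out here, but as a rough sketch of how such a three-condition comparison could be run (with invented scores and an assumed roughly even split of the 130 participants, not the paper’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented comprehension scores for the three conditions (not the
# paper's data): no video, ignorable video, relevant video.
no_video   = rng.normal(75, 8, size=43)
ignore_vid = rng.normal(74, 8, size=43)
attend_vid = rng.normal(65, 8, size=44)

# One-way ANOVA: is there any difference between the three conditions?
f_stat, p_all = stats.f_oneway(no_video, ignore_vid, attend_vid)
print(f"ANOVA: F={f_stat:.2f}, p={p_all:.4f}")

# Pairwise t-test mirroring the reported result: the first two
# conditions should not differ significantly.
t_stat, p_12 = stats.ttest_ind(no_video, ignore_vid)
print(f"no video vs ignorable video: t={t_stat:.2f}, p={p_12:.4f}")
```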

This experiment, together with some applied statistical theory, constituted the quantitative methods used in this article. This might be a sidetrack, but for me the ending of the article was a bit anticlimactic. I think that might be an effect of the research which is presented in media channels: that research always seems quite sensational (sometimes justifiably, but often not). In this case, however, the results were in line with the build-up and the expectations, and that’s actually quite valuable too, even though it’s maybe not as interesting.
An obvious limitation of their method is that the number of participants isn’t higher. Also, as they pointed out, the large majority were female. Diverse demographics are important, especially if you want to be able to generalize a result and make it applicable to a large part of the population (obviously this would be a massive undertaking). Another limitation is the questions which were asked after reading. Depending on your background, you might find some things more interesting and learn those, while someone else with a different background focuses on other things. With a limited questionnaire this could favor one of them. Questionnaires aren’t bad; they’re practical, but not very flexible. Perhaps they could have had a more open test (a benefit of qualitative methods) as long as it was quantifiable (e.g. naming as many keywords from the text as possible).

By using quantitative methods you expect the results to be more objective. Mathematics is considered the purest form of logic, and numbers are supposed to speak the truth. Of course, this is only the premise; it still depends on how you construct your experiments. Fortunately, there are a lot of well-established frameworks for doing quantitative research. It’s still possible to manipulate statistics to your advantage if you care more about getting published than about doing good research, but quantitative research is less dependent on the researchers’ interpretations. This is not to say that there’s no interpretative part to quantitative methods. One example is how one would link a complex behaviour to a set of variables. This can be a challenging task, which was also mentioned in the paper on VR drumming. PCA could be a useful tool for trying to distill the most important components.
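As a small sketch of that last point (made-up measurements, assuming scikit-learn is available): PCA can show how many underlying factors a set of correlated variables really reflects.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Made-up data: 100 participants measured on 6 correlated behavioral
# variables (e.g. reaction times, error rates, survey scores).
latent = rng.normal(size=(100, 2))   # two hidden underlying factors
mixing = rng.normal(size=(2, 6))     # each variable mixes the factors
data = latent @ mixing + 0.1 * rng.normal(size=(100, 6))

pca = PCA(n_components=6)
pca.fit(data)

# Most of the variance should concentrate in the first two components,
# suggesting the six variables reflect roughly two underlying factors.
print(pca.explained_variance_ratio_.round(3))
```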

The article on VR drumming investigated the perceived ownership of virtual objects, and also whether the look and shape of the object had any effect on the immersion, which was found to be the case. It’s quite amazing how one can change one’s behaviour based on an avatar’s appearance. It’s also interesting that it doesn’t have to resemble your own appearance for you to perceive ownership over it. If VR could allow you to interact more naturally with machines or elevate your performance, it could have huge implications for work and entertainment.

Monday 19 September 2016

Theme 2: Critical Media Studies #2

  1. What is "Enlightenment"?


Before, I thought of enlightenment mainly as an era of skepticism towards what was at the time perceived as knowledge. Things that couldn’t be explained by science started being questioned, and the church and religion as a whole were obvious targets of this critique. The Enlightenment might have started like this, but it wasn’t limited to a finite period of time. Enlightenment is rather an ongoing movement, though maybe not as clearly visible as before. Today, science has almost achieved a status comparable to what the church used to possess. One big exception is that science is actually open to criticism. That is nurturing for science and heightens its credibility.


  2. What is "Dialectic"?


The definition of dialectic goes something like this: two or more parties exchange arguments with the ultimate purpose of finding objective truth. This is obviously not the same as having a debate, where the purpose is just to have the best arguments, not necessarily the most truthful ones. You rarely see a political debate with compromises; rhetorical skills are the ultimate weapons there. Dialectic should be the premise on which law and politics are based.


  3. What is "Nominalism" and why is it an important concept in the text?


I’ve gained more insight into what nominalism is. The main takeaway for me was that we’re all particulars; there’s no universal grouping saying that we’re Swedish or something else. In some ways this can be quite a healthy position to take, considering how governments have appealed to people’s feelings of patriotism in the past. At the same time, I would say that it’s quite boring to reject abstract objects and universals, because it can remove the feeling of belonging. The church, through the “creation of abstract objects”, was able to establish rules and norms because people felt they had purpose. Sure, they’ve used other methods too, but that’s really my point. Nominalism can be both positive and negative, depending a lot on the context.


  4. What is the meaning and function of "myth" in Adorno and Horkheimer's argument?


One often thinks of myth as being in contrast to science: science shines light on the unknown. What I found interesting is that science can possibly turn into something like myth. If science only repeats what nature is already telling us, is it that different from what earlier non-scientific theories tried to do? There’s maybe no straightforward answer to this, but a key difference I would argue for is science’s usefulness.


  5. In the beginning of the essay, Benjamin talks about the relation between "superstructure" and "substructure" in the capitalist order of production. What do the concepts "superstructure" and "substructure" mean in this context and what is the point of analyzing cultural production from a Marxist perspective?


The substructure shapes the superstructure, but it’s really a circular relationship: the superstructure has a lot of power to maintain the substructure. With this in mind, people who could control or shape the superstructure would be extremely powerful. A large group of people that together might have the power to shape it would then pose a large threat to the people in power.


  6. Does culture have revolutionary potentials (according to Benjamin)? If so, describe these potentials. Does Benjamin's perspective differ from the perspective of Adorno & Horkheimer in this regard?


We talked about how the mass production and proliferation of art stripped the bourgeoisie of its power. This goes to show how important information is. When information is harder to control and you have multiple sources, people can cross-compare and may end up with a more objective view. This is also the role which journalists are supposed to play in our society.


  7. Benjamin discusses how people perceive the world through the senses and argues that this perception can be both naturally and historically determined. What does this mean? Give some examples of historically determined perception (from Benjamin's essay and/or other contexts).


In some sense, historically determined perception is more subjective than naturally determined perception. It’s produced using historical facts, but those are also subject to interpretation, more so than with naturally determined perception.

  8. What does Benjamin mean by the term "aura"? Are there different kinds of aura in natural objects compared to art objects?


Aura is powerful. It can convey time and place, and that’s actually quite a lot, because that in turn starts thought processes inside people’s minds. As we discussed, fascism politicized aesthetics. Using old art in a new setting can let people make connections that aren’t explicitly stated but are implied by the aura.

Saturday 17 September 2016

Theme 3 Post #1

Selected Research Journal:

New Media & Society is a highly ranked journal focused on research on the theory and practice of media, culture and communication. It was first published in 1999, and its latest impact factor was 3.110.

Selected Research Article:

Recommended for you: The Netflix Prize and the production of algorithmic culture
B. Hallinan, T. Striphas
Published in New Media & Society on June 23rd 2014

The article sets out to analyse the effects of algorithms on our culture. Algorithms increasingly shape our culture in different ways, for example through recommender systems, which provide a user with suggestions for items the user might like, be it music, movies or other things. In the article they analyse the Netflix Prize and try to draw conclusions about the effects of this algorithmic development on culture. The Netflix Prize was an open competition proposed by Netflix to improve their recommendation system: the first team to improve their movie recommendations by 10% would receive $1,000,000 in prize money. In the article they examine this competition on an abstract level and look at what effect algorithms have on culture.

Firstly, they try to define what culture means. They admit that it’s hard to find a clear explanation of what culture is, and they use their own as well as external sources to try to show what culture could encompass. They further argue that there’s been a recent change in our society with the advent of new information technology. Today, engineers find themselves in a sweet spot of being able to almost define what culture is and in which direction to change it. Algorithms are forces which could come to shape our culture. What’s interesting about this competition is that the focus is not only on the algorithms; the competitors are also forced to understand human culture and why people behave the way they do. For recommender systems it’s important to understand the underlying meaning of previous user ratings in order to suggest items of interest in the future. Another important point is that algorithms or other tools might reveal something about our culture. The winning contribution used a mathematical tool called Singular Value Decomposition. It resulted in good performance, but the interpretation of why it does so is in some ways a mystery. Algorithms are part of our culture as well as potential shapers of it, be it through recommendations or other applications.
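As a minimal sketch of the underlying idea (a toy ratings matrix and plain SVD, not the winning team’s far more elaborate matrix factorization): a low-rank approximation of the ratings matrix can be used to fill in predictions for unseen items.

```python
import numpy as np

# Toy ratings matrix (users x movies) with invented values; 0 marks
# "not rated". The real Netflix data had ~480k users and ~17k movies.
R = np.array([
    [5, 4, 0, 1, 1],
    [4, 5, 1, 0, 1],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 5],
], dtype=float)

# Fill missing entries with the mean of observed ratings, then take
# a rank-2 SVD: the two strongest "taste" dimensions.
mean = R[R > 0].mean()
filled = np.where(R > 0, R, mean)
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Reconstructed values at the unrated positions serve as predictions.
print(approx.round(2))
```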

  1. Briefly explain to a first year university student what theory is, and what theory is not.
First of all, I think there’s no simple explanation of what theory is. It’s a word for which it’s hard to find consensus about its meaning. Even though it’s an abstract concept, people probably do have opinions about what it means. Before reading the texts, I would have tried to explain that theory is a way of describing how something in the world works. I would also say that theory can exist on different levels: there could be a theory for an entire field of research as well as for a subtopic within that field. Theory has multiple forms, and it differs from field to field. I also subscribe to the notion of theory presented in the texts, which is about answering questions of the why type and about making predictions about what things might lead to. It’s also important to mention, as they do in the paper, that theory is not data or empirical results, but analytical thought about why the data turned out the way it did, or about what this might result in. Theory is about humans trying to understand the world from a human perspective, how we interpret it.

  2. Describe the major theory or theories that are used in your selected paper. Which theory type (see Table 2 in Gregor) can the theory or theories be characterized as?
The article is a qualitative analysis of the impact of algorithms on culture. The major theory used is an explanatory one. The authors do make some minor predictive attempts, but these are quite limited, and I would argue that they’re not enough to classify the theory as predictive. By the taxonomy established by Gregor, I would classify it under the second category, Explanation. The explanation is too detailed for it to fall under the Analysis category, and the predictions lack the precision required for the EP (Explanation and Prediction) category. Furthermore, the predictions are not really testable, which is a criterion for the EP category.
  3. Which are the benefits and limitations of using the selected theory or theories?
A benefit is that the reader gets a chance to make predictions about the future; the authors provide the backdrop on which such predictions can be based. This could also be a limitation if it’s hard to draw any conclusions or generalize: what is the purpose of the explanation if you can’t use it?
Another limitation of this theory type is that explanations aren’t necessarily objective. Even if the explanations are based on recorded data, the interpretations could be skewed or wrong (is the theory strong enough?).