Wednesday, October 12, 2016

Week 6: 10/4/2016 - 10/11/2016

This week I continued working on the experiment design, summary, and prototype. As we assign tags to our selected tasks, I am defining the criteria for classifying them. When we deliver tasks to our subjects, we want to label each one as simple, average, or complex, so concrete reasoning behind this categorization is valuable to our study. While I mentioned last week that the average and complex categories would be combined, we decided to keep them separate, again by establishing concrete criteria for each. I do believe, however, that certain questions may be easier or harder for users depending on their background. This would be a good point to look into after we have collected our data and begin to analyze it, possibly in combination with recorded user confidence.

This week we also analyzed a paper, presented by Alyssa, titled 'Towards Predicting the Best Answers in Community Based Question-Answering Services'. That study used questions pulled from Stack Overflow, so although its objective differed from ours, it contained elements that related to our own project. Its results included how an answer's posting date relative to that of the original question, as well as the amount of detail and the length of an answer's description, related to whether it would be the best answer.
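To make those feature ideas concrete, here is a minimal Python sketch of how the two signals the paper highlights might be extracted from a candidate answer. The function name, feature names, and example data are my own illustrative assumptions, not the paper's actual implementation or model.

```python
from datetime import datetime

# Hypothetical feature extraction in the spirit of the paper's findings:
# an answer's posting delay relative to the question, and the length of
# its body as a crude proxy for amount of detail. Names and example
# values are illustrative assumptions, not the paper's method.

def answer_features(question_posted: datetime,
                    answer_posted: datetime,
                    answer_body: str) -> dict:
    """Extract two simple features for a candidate answer."""
    delay_hours = (answer_posted - question_posted).total_seconds() / 3600.0
    return {
        "delay_hours": delay_hours,       # how soon after the question it appeared
        "body_length": len(answer_body),  # rough proxy for detail in the description
    }

# Example usage with made-up timestamps and text:
q_time = datetime(2016, 10, 4, 9, 0)
a_time = datetime(2016, 10, 4, 11, 30)
print(answer_features(q_time, a_time, "Use a dict lookup instead of a linear scan."))
# {'delay_hours': 2.5, 'body_length': 43}
```

Features like these could then feed whatever classifier a study chooses; the point here is only how simple the raw signals are to compute.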
