This week I chose three example tasks from each subcategory. In selecting them, I tried for a variety that covered all of the criteria for each subcategory. After choosing the tasks, I used the author tags, along with the tags we generated as a group, to determine the oracle (positive) tags, and I also came up with the distractor (negative) tags. Once that was done, I finished the PowerPoint presentation of our tasks and then reformatted the tasks into Google Form questions. We are implementing the tasks this way in order to have a multiple-answer setup, since Tobii Studio does not support that functionality. I will now begin implementing the final tasks in their entirety in Tobii Studio.
I was out of town this week, so I was not present for our weekly meeting. On my own, I reviewed the "Stack Exchange Tagger" paper, as well as the write-up Alyssa generated. Like us, this study examines automatic tag prediction for Stack Exchange; the authors use a post's title and body text as features and apply support vector classification. An important finding we should consider in our own study is that user information matters for tagging accuracy. I added this to our website and made a few other updates to reflect our latest research.
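To make the approach concrete, here is a minimal sketch of what text-based tag prediction with a support vector classifier might look like, assuming a scikit-learn-style pipeline. The toy posts, tags, and feature choices below are invented for illustration and are not the Stack Exchange Tagger study's actual data or implementation.

```python
# Sketch of multi-label tag prediction from a post's title + body text
# using support vector classification. All data below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Toy training data: title and body concatenated into one text field;
# each post may carry several tags (multi-label).
posts = [
    "How do I reverse a list? I have a Python list and want it backwards.",
    "Segfault in C pointer arithmetic when iterating an array.",
    "Pandas groupby aggregation is slow on a large Python DataFrame.",
]
tags = [["python"], ["c"], ["python", "pandas"]]

# Binarize the tag sets so each tag becomes its own one-vs-rest problem.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(tags)

# TF-IDF features over the combined text, one linear SVM per tag.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LinearSVC()),
)
model.fit(posts, y)

# Predict tags for an unseen post.
pred = model.predict(["Why does my Python loop over a list skip items?"])
print(mlb.inverse_transform(pred))  # e.g. [('python',)]
```

Note that this sketch uses only the post text; the study's finding that user information improves tagging accuracy would correspond to adding user-level features alongside the TF-IDF features, which is omitted here.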