I have concluded my contribution to the study of how developers read and comprehend Stack Overflow questions for tag assignment. I went in with the objectives of figuring out where developers focus most and which areas of interest (AOIs) were most valuable. I discovered that fixation count and fixation duration distributions often correlated across the AOIs I defined. As questions grew more complex, participants (especially those with more programming experience) spent more time on the code and less time on the title. Overall, those with more experience relied increasingly on the code to assign tags, while those with less experience tended to rely on plain text such as the title and description. Keywords were an important feature of the questions: participants fixated on them early and revisited them often. I hope these findings will be useful in creating a weighting system for tag prediction as my team moves on to that phase. My complete study presentation and write-up can be found on the Research page of our website.
Tomorrow is graduation for me. I had a really great time working on this project and working in the field of eye-tracking was very interesting for me. I am excited to move into my career and take all this valuable knowledge with me.
Saturday, December 17, 2016
Week 15: 12/06/2016 - 12/13/2016
This past week has been really hectic with finals and graduate school applications. I worked on and completed my end-of-year report for the CREU program. I also refreshed my memory on Spark Machine Learning through a tutorial Dr. Lazar suggested we try, and I looked over my notes from my summer internship, where I learned how to use Spark Machine Learning for the first time. The Spark Programming Guide offers a more comprehensive tutorial on Spark ML, which I spent time looking over. Spark makes it easy to get up and running quickly, and it is comprehensive in the machine learning algorithms it makes available. Spark will be an excellent tool for our data analytics phase.
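To give a flavor of how quickly Spark ML gets going, here is a minimal sketch of my own (a toy example under my own assumptions, not code from the tutorial or the Programming Guide): it trains a logistic regression model on a tiny in-memory dataset.

import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object SparkMLQuickStart {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SparkMLQuickStart")
      .master("local[*]") // run locally while experimenting
      .getOrCreate()

    // Toy training data: (label, feature vector)
    val training = spark.createDataFrame(Seq(
      (1.0, Vectors.dense(0.0, 1.1, 0.1)),
      (0.0, Vectors.dense(2.0, 1.0, -1.0)),
      (0.0, Vectors.dense(2.0, 1.3, 1.0)),
      (1.0, Vectors.dense(0.0, 1.2, -0.5))
    )).toDF("label", "features")

    // Fit a logistic regression model in a few lines
    val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.01)
    val model = lr.fit(training)
    println(s"Coefficients: ${model.coefficients}")

    spark.stop()
  }
}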
I also finished seven graduate school applications this week! I am looking forward to getting feedback in late February, early March.
Tuesday, December 6, 2016
Week 14: 11/29/2016 - 12/06/2016
This week I attended the MLH Local Hack Day at YSU on Saturday. Jenna gave an interactive talk on setting up Spark, along with a Scala code tutorial covering some simple loops and functions. I was a little intimidated by the syntax, but I think I will adjust to the language over time. I viewed her PowerPoint presentation on running Tweet data through Spark, and we are meeting up on Friday to continue the learning process. We now have six participants in our study and will soon be moving on to the machine learning phase of our project. I am eager to keep moving forward with this phase as I continue to learn more in my field.
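For anyone curious about the syntax that intimidated me, here is a small illustration of the kinds of loops and functions such a tutorial covers (my own toy example, not Jenna's actual code):

object ScalaBasics extends App {
  // A simple for loop over a range
  for (i <- 1 to 5) println(s"iteration $i")

  // A named function
  def square(x: Int): Int = x * x

  // An anonymous function bound to a value
  val double = (x: Int) => x * 2

  // Higher-order use: map a list through both
  val nums = List(1, 2, 3)
  println(nums.map(square)) // List(1, 4, 9)
  println(nums.map(double)) // List(2, 4, 6)
}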
Week 14: 11/29/2016 - 12/06/2016
This week I concluded the data collection phase. I captured participants with a wide range of C/C++ experience. All of the participants were YSU students, majoring in Computer Science or Electrical Engineering. While I was hoping to recruit other majors from the CSIS department to compare gaze data, I think having two majors will be enough for comparison. The process went smoothly: I was able to keep all of the collected gaze data, and I also learned a lot just from moderating. After a purely visual analysis (i.e., looking over gaze-data representations without any tools), I can already identify a few trends. For example, those with less C/C++ experience relied more on the title and question text to assign tags, especially for the more complicated tasks, rather than on the code. It also seems that those with more C/C++ experience were better able to assign tags that apply to the question's solution rather than to obvious things found directly in the text or code; this was expected. I plan to incorporate simple observations like these, as I think they are useful in interpreting how tags were selected. In this upcoming week I will do the following:
1. Analyze the data as a whole - use Tobii to look into fixation count, fixation duration, and time to first fixation (a rough sketch of this kind of aggregation appears after this list). I hope to compare how people considered the oracle (positive) tags versus the distractor (negative) tags in coming to their tag selections.
2. Compare data across experience levels - consider how people came to correct or incorrect conclusions based on their experience levels and try to identify common trends.
Furthermore, I want to use the gathered data to determine keywords that should award higher weights to suggested tags. I think this is something that will be helpful, especially in the future when applying the machine learning algorithms.
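To make this plan a little more concrete, below is a rough sketch of how per-AOI fixation metrics could be aggregated once the gaze data is exported. The CSV layout and column names (aoi, durationMs, timestampMs) are my own assumptions for illustration, not Tobii's actual export format:

import scala.io.Source

// One fixation event: which AOI it hit, how long it lasted, when it started
case class Fixation(aoi: String, durationMs: Long, timestampMs: Long)

object FixationMetrics extends App {
  // Assumed CSV layout: aoi,durationMs,timestampMs (header row skipped)
  val fixations = Source.fromFile("gaze_export.csv").getLines().drop(1).map { line =>
    val Array(aoi, dur, ts) = line.split(",")
    Fixation(aoi, dur.toLong, ts.toLong)
  }.toList

  // Per-AOI: fixation count, total fixation duration, time to first fixation
  fixations.groupBy(_.aoi).foreach { case (aoi, fs) =>
    val count = fs.size
    val durationMs = fs.map(_.durationMs).sum
    val firstFixMs = fs.map(_.timestampMs).min
    println(f"$aoi%-12s count=$count%4d duration=${durationMs}ms firstFixation=${firstFixMs}ms")
  }
}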
Week 14: 11/29/2016 - 12/06/2016
This past week I prepared and presented an introductory tutorial on Scala and Apache Spark for YSU's Local Hack Day. I demonstrated how to set up a Scala and Spark program using Maven in IntelliJ. I explained basic programming concepts in Scala, such as data types, classes, objects, functions, and anonymous functions. I also explained basic concepts of the Apache Spark Streaming API, such as SparkContexts and StreamingContexts. All of the concepts discussed in my workshop can be found in a PowerPoint presentation on the LHD Workshop tab of my website: http://jlwise.people.ysu.edu/ . This work is important to our CREU project, because we will be using Scala and Spark ML to analyze the data Ali collected on software developers viewing Stack Overflow questions. I also learned how to install Scala and Spark in IntelliJ without needing to download the Scala and Spark libraries directly.
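As a taste of those Streaming API concepts, here is a minimal, self-contained setup in the same spirit as the workshop material (an illustration, not the workshop's exact code): it counts words arriving on a local socket in ten-second batches.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingSetup {
  def main(args: Array[String]): Unit = {
    // SparkConf feeds the StreamingContext; "local[2]" leaves a core for the receiver
    val conf = new SparkConf().setAppName("StreamingSetup").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10)) // 10-second batches

    // Read text lines from a local TCP socket (e.g. started with `nc -lk 9999`)
    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}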
Here is a photo of me presenting at Local Hack Day:
Friday, December 2, 2016
Week 13: 11/22/2016 - 11/29/2016
This past week I read and summarized the paper, Multi-Label Classification: An Overview, as a PowerPoint presentation. The paper explains 14 different multi-label classification algorithms (although many of them overlap or are variations of each other). It also experimentally compares three of the algorithms using Hamming Loss and accuracy metrics, determining that the transformation algorithms PT3 and PT4 predict best under the accuracy and Hamming Loss metrics, respectively. I learned how multi-label classification differs from single-label classification: it assigns multiple prediction labels to a single data sample instead of just one. Multi-label classification makes the most sense for the Stack Overflow tag prediction analysis we will be performing next semester, because we want to label a question with multiple tags ("prediction labels").
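To illustrate the problem transformation idea, here is a sketch of the simplest variant (often called binary relevance): turning one multi-label problem into one binary problem per label. The Sample type and Learner interface here are hypothetical placeholders, not from the paper:

// A data sample with a feature vector and the set of labels it carries
case class Sample(features: Vector[Double], labels: Set[String])

object BinaryRelevance {
  // Hypothetical binary learner: given (features, hasLabel) pairs,
  // it returns a prediction function
  type Learner = Seq[(Vector[Double], Boolean)] => (Vector[Double] => Boolean)

  def fit(data: Seq[Sample], allLabels: Set[String], train: Learner)
      : Vector[Double] => Set[String] = {
    // Train one binary classifier per label: "does this sample carry label L?"
    val perLabel: Map[String, Vector[Double] => Boolean] =
      allLabels.map { label =>
        val binaryView = data.map(s => (s.features, s.labels.contains(label)))
        label -> train(binaryView)
      }.toMap

    // The predicted label set is every label whose classifier fires
    (x: Vector[Double]) => perLabel.collect { case (l, predict) if predict(x) => l }.toSet
  }
}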
Wednesday, November 30, 2016
Week 13: 11/22/2016 - 11/29/2016
This week I listened to Jenna's presentation of 'Multi-Label Classification: An Overview'. The paper evaluates different multi-label classification methods, comparing the accuracy achieved by several problem transformation approaches. Three problem transformation methods were implemented in conjunction with the kNN, Naive Bayes, and Support Vector Machine (SMO) algorithms. The experiments used the Genbase and Scene datasets, and the best results came from treating each set of labels as a single label and using the SMO algorithm. That combination achieved the highest mean accuracy of all the learning algorithms within each dataset.
On Saturday, Jenna will be giving a talk on Spark and data analysis with Tweet data. There, I will gain further insight into Apache Spark and learning Scala. I've been experimenting with test data on my machine, and I hope to gain more experience with machine learning algorithms.
Week 13: 11/22/2016 - 11/29/2016
This week we switched eye-tracking labs. I now have access to a different lab and a different machine, and I was able to import my project there. This is where I will conduct my study in the upcoming week (I hope to get at least 5 participants so I have data to present for my capstone). I configured the project to feature AOIs, which will be helpful when analyzing the data. I also worked on some documents that go along with my study. I finalized the pre- and post-surveys, which contain questions about personal background and detail a participant's experience with C/C++ and Stack Overflow. I think it will be interesting to compare this data against how participants read and assign tags. I also updated the one-page study summary to add new information and reflect changes made since I first wrote it (e.g., a sample screen to get users familiar with the layout they will be reading from).
Week 12: 11/15/2016 - 11/22/2016
For Week 12 I continued my work in the lab on our project. I noticed errors in the slide screen captures that had to be addressed. Since the images changed, the existing test data had to be deleted (it would no longer be complete or correspond to the corrected screens).
For our ACM-W chapter, Alyssa and I visited courses that contain first-year students. We hope to recruit some new members, as our chapter will lose people to graduation in the coming months. We also talked about Penguin Hackers and ACM.
Sunday, November 27, 2016
Week 12: 11/15/2016 - 11/22/2016
This past week I traveled to San Francisco, CA for an onsite interview with Pure Storage, visited Carnegie Mellon University (CMU) for a graduate school visit, and reviewed Alyssa's write-up of the paper, Predicting Tags for Stack Overflow Questions.
My onsite interview with Pure Storage lasted three hours. I received a tour of their office and had two technical interviews and one behavioral interview. The Pure Storage office was very inviting: it had open desks and discussion rooms, whiteboards, two kitchens, and a game room. The first technical interview was difficult, because I wasn't used to the type of question I was asked. The second technical interview went really well; I was able to come up with two different solutions to the problem and discuss the pros and cons of each. The behavioral interview also went well. I asked about the different types of projects available for me to work on as an intern and how my skills would fit those projects. Unfortunately, Pure Storage doesn't want to move forward with me as an intern for the summer, but the onsite interview was a great learning experience, as it was my first one.
The highlight of my week was my graduate school visit to CMU. I spoke with four different faculty members and three different graduate students about their research in software engineering, programming languages, and machine learning. I also learned about the graduate student culture and about helpful tips for my application. The research I found the most interesting after all of my discussions was research to turn the written English language into code automatically. It encompasses all of my interests in machine learning and software engineering. After my visit, I can confirm that CMU is my top choice for graduate school.
Wednesday, November 23, 2016
Week 12: 11/15/2016 - 11/22/2016
This week I read and summarized 'Predicting Stack Overflow Tags', part of the Kaggle competition, and installed Apache Spark on my machine. Ali and I reached out to the Programming and Problem Solving 2610 class to promote the women's ACM group on our campus. We feel it's important to get involved in organizations designed to engage women in information technology and computing fields.
Thursday, November 17, 2016
Week 11: 11/8/2016 - 11/15/2016
This past week was pretty busy! I prepared for and had a Facebook phone interview on Friday, 11/11/16. I think it went pretty well. I was able to write an almost fully working solution to the problem that was posed and to analyze its run-time. I spoke with an engineer who works on the back-end of the search feature of the Facebook website. His work involves combining different search result features into one system that searches for multiple types of information at once and tries to figure out which information would be most important to the searcher. Unfortunately, I was not chosen to move on to host matching with Google, but I enjoyed the interview process.
I also studied for and took the GRE for a second time on Tuesday, 11/15/16. I did a little better in quantitative reasoning than I did before, but I went down by one point in verbal reasoning. Overall, my scores from the past two attempts (including this one) put me in the 70th percentile in verbal and the 76th percentile in quantitative. Ideally I would like to do better, but overall I am happy with my scores.
Finally, I made minor edits to and submitted our poster presentation proposal to OCWiC 2017 and reviewed the paper write-up that Alyssa did for the paper, Tag Recommendation in Software Information Sites.
Wednesday, November 16, 2016
Week 11: 11/8/2016 - 11/15/2016
This week I got to do some hands-on work with the project. I was able to get into the eye-tracking lab and do some testing with the Tobii Studio project on the machine we will be running the experiment on. Since I created and configured the project on a smaller screen, viewing it in the lab uncovered some room for improvement, so I worked on visuals tailored to the screen size we will be using. After making these changes, our task samples are larger and clearer, which we hope will improve the precision of the data we gather in the near future. Alyssa and I ran some tests and were able to see how real data comes out of this software. Since we are about a month from the end of the semester, I plan on putting together my capstone presentation soon. I hope to gather a larger data set from participants in the next week or so to have some solid examples to present.
Tuesday, November 15, 2016
Week 11: 11/8/2016 - 11/15/2016
This week I read 'Tag Recommendation in Software Information Sites' and presented the topic to our group. The paper's tag prediction system, TagCombine, collects tags from software information sites like Stack Overflow and Freecode and removes 'stop words' from each block of text. A multi-label learning algorithm is then used to build a multi-label classifier and assign each tag a ranking score based on similarity, tag-term, and multi-label components. TagCombine outperformed earlier methods like TagRec by 22.65%, and the authors hope to achieve a higher percentage in the future.
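As a rough illustration of the ranking idea described above (the component scorers and weights here are placeholders of my own, not the paper's actual models), each candidate tag could receive a weighted sum of its three component scores:

object TagCombineSketch {
  // Each component scores a (question, tag) pair; higher means more relevant
  type Scorer = (String, String) => Double

  def rankTags(question: String,
               candidates: Seq[String],
               multiLabel: Scorer,
               similarity: Scorer,
               tagTerm: Scorer,
               weights: (Double, Double, Double)): Seq[(String, Double)] = {
    val (a, b, c) = weights
    candidates
      .map { tag =>
        // Weighted sum of the three component scores
        val score = a * multiLabel(question, tag) +
                    b * similarity(question, tag) +
                    c * tagTerm(question, tag)
        tag -> score
      }
      .sortBy(-_._2) // highest combined score first
  }
}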
Next week, I'll continue to read about the successes of tag prediction systems and download Apache Spark to my machine to start exploring machine learning algorithms in preparation for the analysis of our gaze data.
Wednesday, November 9, 2016
Week 10: 11/1/2016 - 11/8/2016
This week we continued putting the finishing touches on our OCWiC submission. I joined Jenna and Dr. Sharif in working on the poster, and it was interesting to see in more depth how LaTeX works, as it is something I have always wanted to learn. Additionally, Alyssa presented an abstract along with a one-page write-up of our project; we discussed both as a group and made some changes to formatting and content. This week I plan to continue with our experiment: now that my Tobii project is done, I will be working in the eye-tracking lab running tests and gathering data.
Tuesday, November 8, 2016
Week 10: 11/1/2016 - 11/8/2016
This past week I finished our poster presentation's abstract and proposal for the 2017 Ohio Celebration of Women in Computing Conference. I will be presenting the expertise prediction work that we performed during the CREU 2014-2015 year. I learned how to modify .cls files in LaTeX to improve the formatting of the LaTeX proposal. I also learned how to find the most important details of our research in order to cut the proposal to one page. I look forward to submitting the proposal and abstract this week.
I also had two technical phone interviews with Google last Wednesday, 11/2/2016. I think they went well. I was able to come up with pseudocode or an algorithm for each problem I was posed. I was also able to analyze the runtimes of the algorithms and suggest ways to optimize them. I should find out this week whether I am moving forward in their interview process. Either way, it was a good learning experience and test of my technical skills.
Monday, November 7, 2016
Week 10: 11/1/2016 - 11/8/2016
This week I finished the one-page write-up of our project to submit to OCWiC, the Ohio Celebration of Women in Computing. I also added to the list of features I've been collecting from each study our group presents, in order to gain a better understanding of training data sets and of using machine learning algorithms once we are ready to conduct our study.
I also volunteered for the East Central NA Regional Programming Contest last Friday with Ali and other CSIS students at YSU. It was an interesting experience, and I was amazed at how quickly each team was able to solve the problems in such a short time span.
Thursday, November 3, 2016
Week 9: 10/25/2016 - 11/1/2016
This week I presented the paper Using Eye Tracking to Investigate Reading Patterns and Learning Styles of Software Requirement Inspectors to Enhance Inspection Team Outcome. This was a research study of inspectors in the software life cycle, particularly inspectors in the requirements stage. It focused on how someone's background (particularly their learning style) affects their efficiency and effectiveness when searching for faults that would have negative effects on the software later. We considered how learning styles might affect someone's ability to review code snippets and how we could apply the paper's findings to our study. Another interesting concept from this paper was how the researchers took the inspectors' eye-tracking data and created what they called virtual teams: they used average values and patterns to determine which groups of learning styles would work best together to find faults. The study concluded that inspectors whose reading patterns went strictly in order, beginning to end, weren't very efficient. The best learning style was "sequential", as defined by the Felder-Silverman Learning Style Model.
Also this week, I finished and submitted the application for funding for our ACM-W chapter. This would significantly help jump-start our chapter and allow us to run some of the activities we have planned for this school year. Our chapter is also working on recruitment to offset the loss of members graduating this year. Finally, I took some time this weekend to help moderate the hackathon our university sponsored. It was a fun event where I got a first-hand look at what takes place during these contests.
Tuesday, November 1, 2016
Week 9: 10/25/2016 - 11/1/2016
This week, I prepared the abstract for our tag prediction study that we will submit to OCWiC on November 22nd. I also read 'Using Eye Tracking to Investigate Reading Patterns and Learning Styles of Software Requirement Inspectors to Enhance Inspection Team Outcome' to continue learning about how we can hone our analysis techniques. After reviewing the feature information of every study we have analyzed so far, it's safe to say that user information and history have more of an impact on the accuracy of results than one might think.
Week 9: 10/25/2016 - 11/1/2016
This past week I worked on preparing our submission material for OCWiC 2017. We intend to present the poster from the CREU 2015-2016 year that I presented at Tapia two months ago. We also intend to present a talk about the research we are working on this year in CREU (Alyssa is preparing that submission material).
I also participated in the ACM ICPC Regional programming competition this past Saturday. My team and I solved one problem, and with a little more time we think we would have solved two. It was a fun experience that was helpful preparation for my Google and Twitter interviews.
I have a Google technical phone interview tomorrow evening that I spent the last five days preparing for. I reviewed data structures and algorithms, including run-times and space complexities. I also practiced coding problems from the book Cracking the Coding Interview and from HackerRank.
I am also working on an online technical coding challenge for Twitter, which is going well, and I will be scheduling a technical phone interview with Facebook soon. It seems that my trip to GHC was very fruitful!
Saturday, October 29, 2016
Week 8: 10/18/2016 - 10/25/2016
This past week I was at the 2016 Grace Hopper Celebration of Women in Computing in Houston, TX. I had a great time networking with companies and graduate schools. I also saw some great talks.
At the career fair I visited many graduate school booths, because I am interested in pursuing a Ph.D. in CS. The recruiter from UC Berkeley whom I met at Tapia was there, and I was able to give her a few faculty names that she will contact on my behalf. My information will also be passed along to the MIT CS Department (their masters programs were there instead). Purdue wants me to send them a few faculty names, and I had a good experience visiting the University of Illinois's booth.
I am also looking for a summer internship, so I spent some time visiting different companies. I got an interview with Pure Storage after doing their coding challenge. It went so well that they recently invited me to an onsite interview in Mountain View, CA. I also received emails from Twitter and Facebook to start their interview processes. Unrelated to GHC, I have a Google technical interview scheduled for Wednesday of next week! I am looking forward to seeing how these opportunities pan out.
I went to some great talks about machine learning in production, getting students involved in open source projects, and conflict resolution. One of the RedHat engineers invited me to the open source panel and the conflict resolution talk. The talks were, respectively: 7 Hidden Gems: Building Successful Machine-Learning Products; Open Source belongs in your class. Where do you start?; and Constructive Conflict Resolution: or how to make lemonade out of lemons. I really enjoyed the talk about getting students involved in open source projects, because it will be useful information for when I am a faculty member at a university.
I also visited the ACM-W booth and CRA-W booth!
Here are some pictures from the conference:
Wednesday, October 26, 2016
Week 8: 10/18/2016 - 10/25/2016
This week I chose three examples from each subcategory of tasks. In picking them, I tried to get a variety that covered all parts of the criteria for each subcategory. After choosing the tasks, I used the author tags and the tags we generated as a group to determine the oracle (positive) tags, and I also came up with the distractor (negative) tags. I then finished the PowerPoint presentation of our tasks and reformatted them into Google Forms questions. We are implementing the tasks this way to allow a multiple-answer setup, since Tobii Studio does not support this functionality. I will now begin implementing the final tasks in their entirety in Tobii Studio.
I was out of town this week, so I was not present for our weekly meeting. I reviewed the "Stack Exchange Tagger" paper on my own, as well as Alyssa's write-up of it. Like ours, this study examines Stack Exchange auto-tagging/prediction methods: the authors use the title and text to come up with tags, using support vector classification. An important outcome that should be considered in our own study is that user information matters to tagging accuracy. I added this to our website and made a few other updates to reflect our latest research.
Tuesday, October 25, 2016
Week 8: 10/18/2016 - 10/25/2016
This week I presented 'Stack Exchange Tagger', a student research article focused on predicting tags using a classifier, part of the more general problem of developing accurate classifiers for large-scale text datasets. The students took 10,000 questions from StackOverflow and analyzed the text using linear Support Vector Classification. The results showed that the linear SVC performed better than all other kernel functions, and the best accuracy obtained from the analysis was 54.75%. The students concluded that the accuracy could have been better if user information had been considered.
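For a sense of what a comparable text classification pipeline looks like in the toolkit we plan to use, here is a hedged Spark ML sketch: tokenize the question text, hash it into term-frequency vectors, and fit a linear model. I substitute logistic regression for the students' linear SVC, since that is the linear classifier readily at hand in Spark ML, and the data here is toy data of my own:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

object TagTextClassifierSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TagTextClassifierSketch").master("local[*]").getOrCreate()

    // Toy data: question text and 1.0 if it should carry the "c++" tag
    val data = spark.createDataFrame(Seq(
      ("segfault when dereferencing a pointer in c++", 1.0),
      ("how do c++ templates deduce types", 1.0),
      ("center a div horizontally with css", 0.0),
      ("sort a python list of tuples by second element", 0.0)
    )).toDF("text", "label")

    // text -> tokens -> term-frequency vectors -> linear classifier
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val tf = new HashingTF().setInputCol("words").setOutputCol("features")
    val lr = new LogisticRegression().setMaxIter(20).setRegParam(0.01)

    val model = new Pipeline().setStages(Array(tokenizer, tf, lr)).fit(data)
    model.transform(data).select("text", "prediction").show(false)

    spark.stop()
  }
}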
Next week, I will be developing the abstract of our project, and we will be closer to submitting to OCWiC 2017.
Tuesday, October 18, 2016
Week 7: 10/11/2016 - 10/18/2016
This week I read 'Predicting Tags for StackOverflow Posts' and presented the material at our meeting. The results showed that the tag prediction classifier was 65% accurate on an average of one tag per post. The study was designed to help improve the user experience on StackOverflow by gathering a collection of tags for users looking for specific solutions to programming problems, and by possibly implementing a tag clean-up system. Next week, I will continue to explore possible methods to build our prediction model after we collect our gaze data.
Week 7: 10/11/2016 - 10/18/2016
This past week I prepared for and participated in a press conference at YSU for Dr. Sharif's NSF CAREER award. She and I spoke about the eye-tracking research we are doing, which includes the work we do with CREU. I spoke about how big of an impact this research has had on my future academic career. It was an honor to participate in the celebration of Dr. Sharif's achievements, as she has been an excellent mentor and friend.
Here is a publicity video from YSU showing portions of the press conference:
I also took the GRE on Saturday, October 15, 2016, so I spent most of my week preparing for that. My unofficial scores were respectable, but I would like them to be a little higher, so I am planning on taking the GRE again on Tuesday, November 15, 2016. I am excited to be one step closer to finishing my graduate school applications!
I am currently attending the 2016 Grace Hopper Celebration of Women in Computing in Houston, Texas. I am looking forward to a great time and will have pictures to come!
Monday, October 17, 2016
Week 7: 10/11/2016 - 10/18/2016
Continuing this week with task and experiment creation, we are finishing up the tag assignment for our tasks. I have re-categorized and added to the task list according to the criteria I specified. The criteria are as follows:
- Simple tasks will include content that will be found in CS1 classes; common knowledge of topics such as simple data types, operators, control structures, basic properties of C++ language.
- Average tasks will include knowledge that is common to someone beyond the CS1 level and gained through experience with programming; specific details of data structures, more involved application of aspects from the simple level.
- Complex tasks will include applications of more difficult or compound topics; algorithm designs, bit manipulation, using pointers, obscure/intense properties of the C++ language.
Wednesday, October 12, 2016
Week 6: 10/4/2016 - 10/11/2016
This week I continued working on the experiment design, summary, and prototype. As we assign tags to our selected tasks, I am in the process of defining the criteria for classifying tasks. When we deliver tasks to our subjects, we want to define them as simple, average, or complex, so concrete reasoning for this categorization is valuable to our study. While I mentioned last week that the average and complex categories would be combined, we decided to separate them again by establishing concrete criteria for each. I do believe, however, that certain questions may not be as hard for some users, depending on their background; this would be a good point to look into after we have collected our data and begin to analyze it, possibly in combination with recorded user confidence. This week we also analyzed a paper, presented by Alyssa, titled 'Towards Predicting the Best Answers in Community Based Question-Answering Services'. This study used questions pulled from Stack Overflow, so although its objective differed from ours, it contained elements that related to our own project. The results included which answers would be best based on their posting date relative to the initial question's, as well as how the amount of detail and length of description in an answer related to accuracy.
Tuesday, October 11, 2016
Week 6: 10/4/2016 - 10/11/2016
This week, I read the article 'Towards Predicting the Best Answers in Community Based Question-Answering Services' and presented the material to our group. The study analyzed a large dataset from StackOverflow, based on answer content for a random sample of questions asked before August 2012. It sought to predict whether an answer would be selected as the best, based on a classifier learning from labeled data. The results showed that the more competing answers an original answer has on StackOverflow, the less likely it is to be selected as the best, and that what qualifies as a best answer is one with more details and a clear explanation of a solution. Typically, answers that provide an in-depth solution rank best, and sure enough, prediction for users who were not the first to answer the posed question was 70.71% accurate.
I also read through our task list that Ali created and came up with tags that I thought worked best with the questions being asked on StackOverflow. Once our tasks are finalized, we will be using our empirical studies lab to gather participants and ask them to predict tags of their own, or choose from our list of five.
Week 6: 10/4/2016 - 10/11/2016
This past week I came up with possible tags for each of the StackOverflow questions Ali created for the study we will be conducting, read the paper Towards Predicting the Best Answers in Community-Based Question-Answering Services, finished the second revision of the paper we submitted to YSU's Honors College Journal, and spoke at the Hyland and OCWiC Women in Tech Event about my research and my experiences at OCWiC in 2015.
On Saturday (10/8/2016) I spent the afternoon at Hyland in Westlake, Ohio talking about the eye-tracking research I do with the CREU program and my experience presenting that work at the 2015 Ohio Celebration of Women in Computing. I also highlighted the CREU program as something the women there should consider doing. A picture from the event can be seen to the right.
I also read the paper, Towards Predicting the Best Answers in Community-Based Question-Answering Services. It addressed the problem of predicting whether an answer will be selected as the "best" answer, based on learning from labeled data. The contributions of this paper that will be interesting for our CREU work are the designed features that measure key aspects of an answer and the features that heavily influence whether an answer is the "best" answer. The authors concluded that the best answer is usually the one with more details and comments and the one that is most different from the others. We would like to predict the best answer in StackOverflow documents using eye-tracking features, so this paper is beneficial reading towards that goal.
Finally, I finished the second revisions for the paper we submitted to the YSU Honors College Journal about the work we performed in the CREU 2015-2016 program. I also created tags I thought were appropriate for the StackOverflow questions we will be using for our study.
Tuesday, October 4, 2016
Week 5: 9/27/2016 - 10/4/2016
After re-reading all of the articles we have analyzed since the start of our project, I created a summary table of features for each study in Microsoft Word. It displays the author names, date, dataset, methods used, features, and results of each study. I then analyzed the sample training data I started sorting through last week and made a list of tags that I find best suit the questions asked by users on StackOverflow. By analyzing the Kaggle dataset, I feel more confident in examining the questions our group has come up with and predicting which tags would best fit each one. Once we reconvene and compare our tags, we will be that much closer to setting up our study and gathering participants.
I also read the article "Reading Without Words: Eye Movements in the Comprehension of Comic Strips" and listened to my group member Jenna's presentation of its analysis. In the study, participants were asked to view single comic strip panels, in order and randomized, and then sets of six panels, both ordered and randomized. The study tracked each participant's gaze fixations to investigate how disrupting the viewing sequence of a comic strip affects attention. The conclusion showed that skewing a person's way of viewing a comic strip slows down their comprehension of the narrative. I think analyzing these studies will help us as we finalize our tag prediction experiment.
Week 5: 9/27/2016 - 10/4/2016
This past week I reread and edited my summary of the paper, Reading Without Words: Eye Movements in the Comprehension of Comic Strips. The goal of the paper was to investigate how disrupting the visual sequence of a comic strip would affect attention. The authors of the paper used eye-tracking data to determine visual attention. They found that disrupting the visual sequence of a comic strip slows down the viewer and makes comprehending the narrative more difficult.
The highlights of the paper that are worth pursuing for our study are the related work by Cohn 2013b, 2013c, and McCloud 1993 that analyzes the comprehension of words in sentences, and the correlation maps used to compare fixation locations between experiments. The correlation maps are interesting, because they are used as a technique to determine how similar the visual attention of two different participants is on two different panels. For our study it might be useful to determine how similar the visual attention of two different participants is on two different StackOverflow questions. Analyzing the comprehension of words/code in a StackOverflow question would also be useful to our study.
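To make the correlation-map idea concrete, here is a rough sketch of how two participants' fixation locations could be compared: bin each participant's fixations into a coarse grid over the stimulus and take the Pearson correlation of the flattened grids. The grid binning is my own assumption about how such maps might be built, not the paper's exact method:

object FixationCorrelation {
  // Bin (x, y) fixation points into a cols x rows grid over a w x h stimulus
  def toGrid(fixations: Seq[(Double, Double)], w: Double, h: Double,
             cols: Int, rows: Int): Array[Double] = {
    val grid = Array.fill(cols * rows)(0.0)
    for ((x, y) <- fixations) {
      val c = math.min((x / w * cols).toInt, cols - 1)
      val r = math.min((y / h * rows).toInt, rows - 1)
      grid(r * cols + c) += 1.0
    }
    grid
  }

  // Pearson correlation of two flattened grids (assumes non-zero variance)
  def pearson(a: Array[Double], b: Array[Double]): Double = {
    val ma = a.sum / a.length
    val mb = b.sum / b.length
    val cov = a.zip(b).map { case (x, y) => (x - ma) * (y - mb) }.sum
    val va = a.map(x => (x - ma) * (x - ma)).sum
    val vb = b.map(y => (y - mb) * (y - mb)).sum
    cov / math.sqrt(va * vb)
  }
}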
Week 5: 9/27/2016 - 10/4/2016
This week I finished the list of tasks. We decided to use posts that refer to C++ code only (initially we had discussed both Java and C++) for simplicity. Categorizing by difficulty, I found 10 simple tasks and 10 average-to-complex tasks that would be possibilities for our study. I decided to merge the average and complex categories, since task difficulty will vary between subjects depending on personal experience. For our experiment design, subjects will need to determine the appropriate tags for a posting, so our next step is to come up with tags for each task in order to build the lists of suggestions they may choose from.
Additionally this week we reviewed the paper "Reading Without Words: Eye Movements in the Comprehension of Comic Strips". This was an eye-gaze study using comic strips (images only, no text) as visual stimuli. The researchers mixed the order of the comic strips and studied how this affects attention; ultimately it made the comic strips harder to understand, which required more attention. Correlation maps were then used to compare fixation locations in the comic strips. The researchers analyzed their data using t-tests studying variance between experiments (comparing the gaze data for the normal sequence of comics versus the randomized sequence).
Wednesday, September 28, 2016
Week 4: 9/20/2016 - 9/27/2016
This past week I read two papers, Gaze-tracked Crowdsourcing and Reading Without Words: Eye Movements in the Comprehension of Comic Strips.
The Gaze-tracked Crowdsourcing paper tries to determine whether the gaze-tracking of workers during word disambiguation tasks can disclose useful information that improves the task output. The authors arrived at two interesting results: first, the majority of their participants read from the beginning to the end, in order, until they found a strong enough sense-distinguishing word; second, the rest of the participants preferred fast text skimming. It would be interesting to determine whether this behavior holds true when software developers view Stack Overflow documents while trying to determine the appropriate tags for the questions.
I am assigned to read and summarize the other paper, Reading Without Words: Eye Movements in the Comprehension of Comic Strips, for our 10/4/2016 meeting. I decided to initially read it and create a PowerPoint presentation early, because the paper is longer and I know I will need to take another pass at it this week. I will explain my observations of this paper in my next blog post.
Tuesday, September 27, 2016
Week 4: 9/20/2016 - 9/27/2016
This week, I read 'Gaze-Tracked Crowdsourcing' by Jakub Simko and Maria Bielikova. The information is pertinent to our study because we have been using gaze-tracking with iTrace, and it's helpful to gain a different perspective on studies using this method. In the study, participants were asked to read through snippets of text and, based on a word from the text, determine the sense of that word from a possible set of senses.
We have been assembling a task (consisting of questions from StackOverflow) similar to the task designed in the article, but instead of text, we will use snippets of code to get our participants to predict tags based on the content of the question sets.
This week I will be sifting through data sets provided by Kaggle to compare the tags assigned to each question by different users on the Stack Exchange network, to gain a firmer understanding of tag prediction so that we can continue with our project.
Week 4: 9/20/2016 - 9/27/2016
This week I finished my review of the paper "Gaze-tracked Crowdsourcing". From the paper I realized the importance of gaze-tracking as an implicit feedback source that other methods (e.g., cursor tracking, clicking, scrolling) cannot provide. While the main takeaway from the paper was using eye-tracking to identify sense-distinguishing words to enrich data sets, there was also some insight into user confidence: the study was able to detect confidence in user answers and even get some indication of which answers were going to be correct, based on reading behavior in the gaze data. We discussed our own study and decided that user confidence is something we would also like to record and analyze for our tasks. This week I also began some experiment design. We came up with our main task, which involves having users pick tags for various Stack Overflow postings presented to them. We are going to use C++ questions that range from simple to complex. In the coming week I plan to continue building this list of candidate questions so we can decide on the final pool of tasks for our experiment.
Tuesday, September 20, 2016
Week 3: 9/13/2016 - 9/20/2016
This past week I attended the 2016 Tapia Diversity Conference and had a great time!
I had interviews with Bloomberg L.P., Northrop Grumman, IBM, and BNY Mellon for summer internships. Northrop Grumman wants to move forward with an official offer, and IBM will be contacting me in the next week. I also visited a lot of graduate school booths. The UC Berkeley recruiter requested that I send her three faculty I would be interested in working with, and I met the chair of the CS department at Brown when I won their swag raffle. I learned from the graduate school recruiters how to write a good graduate school application essay.
My poster presentation was very well received, and many of the poster session attendees seemed very interested in our work for CREU. I learned about other avenues for extending our research such as investigating why we can predict developer expertise. Being able to predict developer expertise well implies that there are underlying differences among the eye gaze features of expert and novice software developers. What are those differences and can we utilize them for teaching techniques/strategy heuristics?
Finally, I read the paper titled, Predicting Closed Questions on StackOverflow. The goal of this paper is to build a classifier that predicts whether or not a question will be closed given the question as submitted, along with the reason that the question was closed. The features used in this paper will be helpful in determining the features to use in our analyses.
Here are some pictures from Tapia:
Week 3: 9/13/2016 - 9/20/2016
This week, I analyzed and presented a dataset from a contest sponsored by Kaggle. I read through Galina Lezina and Artem Kuznetsov's published work, 'Predict Closed Questions on StackOverflow.' Their task involved examining public and private StackOverflow datasets of user, post, and tag features and building a classifier that predicts whether a question will be closed, along with the reason the question was closed through a user vote. They hoped their research would ease the task of moderating these posts on the Stack Exchange network through automation.
The results indicated that, for the method used (an algorithm called Vowpal Wabbit), user interaction features slightly worsened the outcome, while text features contributed the most to performance. They also noted that some questions are still marked open when in reality they should be closed.
I think that even though the baseline models for the data provided by Kaggle excluded the actual content of each post, every user who examines a question on StackOverflow relies on that content for context when he or she can't understand what is being asked. As the results of the study showed, text features were informative, and this holds true in reality. How can one fully answer a question on StackOverflow without some reference to what is being asked?
As a StackOverflow user, I hope that we will soon be able to predict these outcomes with these kinds of classifiers.
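To fix the idea in my own head, here is a minimal sketch of a text-feature baseline for this task. This is my own illustration, not the authors' setup: I use scikit-learn's TF-IDF and logistic regression as a stand-in for Vowpal Wabbit, and the toy questions and close reasons are invented.

    # Minimal stand-in for a text-feature close-reason classifier.
    # scikit-learn replaces Vowpal Wabbit here; data are invented toys.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    questions = [
        "How do I reverse a linked list in C++?",
        "What is the best programming language?",
        "Why does my pointer segfault? Minimal example below.",
        "Recommend me a good laptop for coding.",
    ]
    labels = ["open", "not constructive", "open", "off topic"]

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(questions, labels)
    print(model.predict(["Which IDE should everyone use?"]))

Even a tiny pipeline like this makes the paper's finding plausible: the words of the question alone carry a lot of signal about whether it belongs on the site.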
On Sunday, I attended the Silly Science Sunday event sponsored by OH WOW! in Youngstown. I helped Dr. Sharif and Ali gather participants to demo our eye-tracking game, as well as a simple code-tracing game on small robots. We had a great turnout at the event, and as always, it's great to inspire young minds with the lighter side of our technology!
Monday, September 19, 2016
Week 3: 9/13/2016 - 9/20/2016
This week I wrapped up my evaluation of the research paper, Synthesizing Image Representations of Linguistic and Topological Features for Predicting Areas of Attention. The conclusions we reached were that area prediction seemed more accurate when a subject was given more time to read the document, or when the subject was presented with objectives/tasks before reading. Very little accurate information about the features important for comprehension was gathered during the tasks where subjects had only seconds to read the text and gather whatever information they could. Jenna also provided us with some information about what iTrace can provide with respect to Stack Overflow, which will be beneficial to our study. She found that the page is broken up into a certain number of parts, i.e. chunks of text, the title, the code excerpt, comments, votes, etc. Based on these parts, iTrace will then return which part of the page was focused on, so we can use these existing features when implementing our own study (I sketch this idea just below). We are also currently evaluating data from a past Kaggle competition that we feel is relevant to our study. The competition was to predict tags across Stack Exchange sites (so the dataset includes technical and non-technical questions) given only the text and title of a posting.
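To get a feel for how we might consume that kind of output, here is a hedged sketch of AOI hit-testing: given bounding boxes for the parts of a Stack Overflow page, map each fixation to the part containing it. The AOI names and pixel coordinates are invented for illustration; they are not iTrace's actual format.

    # Hedged sketch of AOI hit-testing; AOI names and boxes are invented.
    AOIS = {
        "title": (0, 0, 800, 60),            # (left, top, right, bottom)
        "question_text": (0, 60, 800, 300),
        "code": (0, 300, 800, 600),
        "votes": (810, 0, 860, 200),
    }

    def aoi_for_fixation(x, y):
        """Return the name of the AOI containing point (x, y), or None."""
        for name, (left, top, right, bottom) in AOIS.items():
            if left <= x < right and top <= y < bottom:
                return name
        return None

    fixations = [(120, 30), (400, 450), (830, 90), (900, 900)]
    print([aoi_for_fixation(x, y) for x, y in fixations])
    # -> ['title', 'code', 'votes', None]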
Tuesday, September 13, 2016
Week 2: 9/6/2016 - 9/13/2016
This week, my contribution was to read a paper entitled 'Synthesizing Image Representations of Linguistic and Topological Features for Predicting Areas of Attention.' Ali presented the material at our weekly meeting, and I gained a better grasp of the concepts after her presentation. The study was designed to test how students in our field retain information after reading a document, and how they comprehend and remember reading material within a given time span. Subjects were asked to retain as much information as they could and then reiterate that information in a ten-second sprint. They were also asked to read a document and find answers in the text. This is important to study because it shows how readers retain information and how quickly they can recall pertinent information, a needed strength in our field.
Next week, I will be presenting my analysis of Galina's study 'Predict Closed Questions on Stack Overflow' so I can relate it to this week's presentation.
Week 2: 9/6/2016-9/13/2016
This past week I spent most of my time getting ready for Tapia. I leave tomorrow morning and I am very excited! I have two interviews lined up with Bloomberg L.P. and BNY Mellon, and I give my poster presentation on the work we did in CREU last year, all on Thursday. I printed out copies of my resume and CV to pass out at the career fair, along with talking points for my poster presentation. More importantly, I took the time to make substantial revisions to our poster.
I also read Synthesizing Image Representations of Linguistic and Topological Features for Predicting Areas of Attention in order to learn more about linguistic analysis techniques combined with eye-tracking data. The results of the paper show that, for a precise reading task and a question answering task, a linear combination of image representations of linguistic features helps to explain the gaze evidence of readers within the same document.
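To make the 'linear combination' idea concrete, here is a toy numpy sketch of my own (not the paper's model): each linguistic feature is rendered as a 2D map over the document, and a weighted sum of those maps serves as the predicted attention map. The feature names, weights, and maps are all placeholders.

    # Toy weighted sum of per-feature maps into a predicted attention map.
    # Feature maps and weights are random placeholders; the paper derives
    # them from image representations of linguistic features.
    import numpy as np

    rng = np.random.default_rng(0)
    h, w = 20, 30                                  # document as a 20x30 grid
    names = ["word_length", "rarity", "position"]  # hypothetical features
    feature_maps = {n: rng.random((h, w)) for n in names}
    weights = {"word_length": 0.2, "rarity": 0.5, "position": 0.3}

    attention = sum(weights[n] * feature_maps[n] for n in names)
    attention /= attention.sum()   # normalize to a fixation probability map
    print(attention.shape, round(float(attention.sum()), 6))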
Finally, I looked into the StackOverflow support for iTrace. I determined that iTrace can currently tell us only which portions of a StackOverflow document a person looked at; it does not give you meta-data for those portions, although the capability is there.
Pictures from Tapia are to come!
Sunday, September 11, 2016
Week 2: 9/6/2016 - 9/13/2016
For me, week 2 involved finishing up some preparation work as well as beginning some research for our project. I completed the last of the required materials that needed to be submitted, which included our acceptance letter and a few other documents. I also worked through two of the CITI Program's short behavioral research courses. I wasn't sure what to expect, but they provided valuable information that gave me an idea of what working with a group of people to gather data is going to be like, along with some basic ethical practices, as I have never been involved in an official research project outside of the classroom. I also updated our website to include this year's information and completely refreshed the design. I added a page for this blog that includes a live RSS feed widget so that all of our information here can be viewed there as well. We plan to use our website to post experiments and detailed information about our research in order to display our progress as we move forward with the project.
Our research began this week with some preliminary review of past work on eye-tracking studies. Jenna presented a paper on an eye-tracking study that examined differences between C++ and Python comprehension in novices vs. non-novices. I found the results interesting, and the paper also gave me an idea of the work we might be conducting in our own project. This week we also reviewed another paper on eye-tracking; this time the topic was where the eye needs to focus in order to maximize comprehension while reading, specifically which linguistic features are most valuable for a reader's full understanding. I will be putting together a presentation for the group this week on that paper. There were some specific methods and considerations discussed in this paper that I look forward to bringing up with the team, as we think about whether they are something we may be able to apply in our own studies.
I look forward to keeping up with this blog, not only to exhibit my progress as we work through this project but also to keep myself organized. This will be a helpful source to look back upon as the project finishes up.
Friday, September 9, 2016
Week 1: 8/30/2016 - 9/6/2016
As a newcomer to this project, I started off the first week completing the IRB Training Courses, and then reading about Rachel Turner, Michael Falcone, Bonita Sharif and Alina Lazar's eye tracking study, entitled 'An Eye Tracking Study Assessing the Comprehension of C++ and Python Source Code'.
It's important to examine which programming language novices in my field of study will begin learning at the start of their course work, because it can be the deciding factor in whether they continue to pursue a degree in computer science. The results of the study showed a significant difference between C++ and Python in the fixation rate on lines of code that contained errors, and I would have to agree with the conclusion. Learning a new language doesn't come without its frustrations, but it's retaining those programming concepts from the beginning that helps in the long run.
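As a back-of-the-envelope illustration of the metric in question, here is a tiny sketch of computing the share of fixations that land on buggy lines. The fixation line numbers and buggy-line set are invented, not the paper's data.

    # Share of fixations landing on buggy lines; all values are invented.
    fixation_lines = [3, 3, 7, 12, 7, 7, 1, 12, 12, 12]  # line hit per fixation
    buggy_lines = {7, 12}

    on_bug = sum(1 for line in fixation_lines if line in buggy_lines)
    rate = on_bug / len(fixation_lines)
    print(f"{on_bug}/{len(fixation_lines)} fixations on buggy lines "
          f"(rate = {rate:.2f})")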
I also assisted Jenna Wise and Dr. Sharif with multiple demonstrations of our eye tracking program at the Canfield Fair on September 4th, and more than anything, it was exciting to show participants how our technology can be fun!
I'm eager to continue developing my path as a student researcher, as well as improving my analysis techniques.
Tuesday, September 6, 2016
Week 1: 8/30/2016 - 9/6/2016
It has been an exciting week back to school and back to working on this year's CREU project. I created our blog and added Alyssa, Ali, Dr. Lazar, and Dr. Sharif to it.
Tapia is right around the corner, Sept. 14-17, and I will be presenting our CREU project from last year there. So I spent part of this last week putting together a first draft of the poster. After going over the first draft with Dr. Sharif, I learned that I can put far fewer words on the poster and far more visual aids. The poster should be able to stand on its own, but not overwhelm the reader to the point where they cannot get the complete idea of the poster in 30 seconds to a minute of looking at it. In the coming week, I will be revising the poster.
Finally, I read An Eye-tracking Study Assessing the Comprehension of C++ and Python Source Code in order to get an idea of a study design and data analysis related to what we would like to perform for this year's CREU project. I learned more specifics about eye-tracking study designs and about Linear Mixed Effects Regression and Mann-Whitney non-parametric tests (I sketch both below). I summarized this paper in a PowerPoint presentation for later reference. The paper found a statistically significant difference between the C++ and Python participant groups in the rate at which they looked at buggy lines of code, and a statistically significant difference between novices and non-novices in code comprehension ability for both participant groups.
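As a note to myself on what those two analyses look like in practice, here is a hedged Python sketch with random placeholder data (not the paper's measurements): statsmodels fits a linear mixed-effects model with a random intercept per participant, and scipy runs the Mann-Whitney U test.

    # Toy sketch of the two analyses named above; data are random placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "participant": np.repeat([f"p{i}" for i in range(10)], 4),
        "language": np.tile(["Cpp", "Python"], 20),
        "fixation_rate": rng.normal(0.5, 0.1, 40),
    })

    # Mixed-effects model: fixation rate by language,
    # with a random intercept for each participant.
    mixed = smf.mixedlm("fixation_rate ~ language", df, groups=df["participant"])
    print(mixed.fit().summary())

    # Mann-Whitney U test comparing the two language groups.
    cpp = df.loc[df.language == "Cpp", "fixation_rate"]
    py = df.loc[df.language == "Python", "fixation_rate"]
    print(mannwhitneyu(cpp, py))

Both run on placeholder data in seconds, so they should be handy templates once our real gaze data is ready.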