Wednesday, March 31, 2010

Feeling "enough time" is not related to actual time starting to write

I never asked the students in my study whether or not they procrastinated. But I did ask them if they felt they'd spent "enough time" on their papers.

See, 80% of procrastinators said they'd spent enough time, while 93% of non-procrastinators said they had. That makes sense, right? If you procrastinated, you're less likely to think you'd spent enough time on your paper. Except that the 93% figure for non-procrastinators is misleading. Of that 93%, several gave responses on the follow-up question that contradicted their answer to the "enough time" question. Only 83% of non-procrastinators gave an unequivocal "Yes, I spent enough time on my paper." And that's not out of line with the 80% of procrastinators.

And remember, all we're saying here is that non-procrastinators were more likely than procrastinators to say they'd spent enough time. The large majority of procrastinators still felt they'd spent enough time on their papers.

Here's the cool part. Since there were only five procrastinators in my study, that 20% who said "no" is only one person. This person reported starting to plan 21 days before the paper was due, the maximum of any procrastinator. And she started to draft 3 days before the paper was due. Also the maximum of any procrastinator. So the procrastinator who reported starting on her paper the earliest is also the only procrastinator who doesn't think she spent enough time on her paper.

Monday, February 22, 2010

Moving on to the Results and Discussion

I'm mostly done with my methods chapter now. So I get to move on to the fun parts--results and discussion.

I'm a bit concerned that a barrage of numbers will turn off readers trained in English, so I'm going to try to combine results and discussion. Otherwise I don't know if anyone will actually look at my pretty histograms. (If you want to make a histogram in Excel, here is the best explanation. It's written for an out-of-date version of Excel, but it's so much clearer than version-correct explanations that it's still the most usable set of instructions for Excel 2007.)

So I'll need a section in which I cover the results from my questionnaires. Some of this information is quantitative and lends itself to tables and charts. For instance, I definitely need charts that depict when the students started their papers. I'd like to be able to show both planning start times and drafting start times on one chart. Maybe some distribution curves? I'm not sure.
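
Just to give myself a sense of what that combined chart might look like, here's a quick sketch in Python with matplotlib instead of Excel. The numbers are placeholders I made up, not data from my questionnaires.

    import matplotlib.pyplot as plt

    # Placeholder start times in days before the due date -- illustrative
    # numbers only, so the chart has something to draw.
    planning_start = [21, 14, 10, 8, 7, 5, 4, 3, 2, 1]
    drafting_start = [10, 7, 6, 5, 4, 3, 3, 2, 1, 1]

    bins = range(0, 24, 3)  # 3-day bins
    plt.hist(planning_start, bins=bins, alpha=0.5, label="Started planning")
    plt.hist(drafting_start, bins=bins, alpha=0.5, label="Started drafting")
    plt.xlabel("Days before the paper was due")
    plt.ylabel("Number of students")
    plt.legend()
    plt.show()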

I'll also want to have a chart that compares proofreading of procrastinators and non-procrastinators.

But some of the information from the questionnaires doesn't chart well, like "If you feel you should have started your paper sooner, why is that?" So I'll also need to discuss what kinds of answers students put down for open-ended questions.

Then the next section covers the data I pulled from the students' papers--the length and the number and kinds of errors. I have several charts for this section already, but there are a few more in the works. I also need to do some regression analysis. Or at least find where I wrote down the regression analyses I did before...
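
So that future-me doesn't have to hunt for the procedure again, here's a bare-bones sketch of the kind of regression I mean, in Python with scipy. The variables and numbers are placeholders, not my recorded results.

    from scipy import stats

    # Placeholder pairs: days before the due date a student started drafting,
    # and that student's surface errors per 100 words. Not my actual data.
    days_before_due = [21, 14, 10, 7, 5, 4, 3, 2, 1]
    errors_per_100 = [1.2, 1.5, 1.1, 2.0, 2.4, 1.9, 2.8, 3.1, 2.7]

    result = stats.linregress(days_before_due, errors_per_100)
    print("slope:", result.slope)
    print("r squared:", result.rvalue ** 2)
    print("p value:", result.pvalue)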

Then the final section will be case studies of the actual procrastinators' papers. I didn't plan to do this when I designed my study. But since I ended up defining only five participants as procrastinators, it made sense to look at their papers in more depth. (See here, here, here, here, and here for what I posted previously.) I'm a big fan of quantitative data, but I think the case studies are useful for a few reasons. One, while the quantitative data can help show whether there are differences between the performances of procrastinators and non-procrastinators, it can't give much sense of why that is. I could just speculate. For instance, since procrastinators appear to have fewer performance errors than non-procrastinators, I might speculate that procrastinators are just using simpler sentence structures. But instead of speculating, I could actually pull out Mort's paper and note that he used complex sentence structures that got him into trouble with subject-verb agreement. Another reason I think case studies are a good idea is that people in the English department might actually read them instead of glossing over them.

Friday, January 22, 2010

Coding Surface Errors

So what is a surface error, anyway? It's usually the kind of thing a student would call a "grammar mistake." It might be a misspelling, a comma splice, or a problem with number agreement. When I coded my data, I decided the best way to standardize my coding was to code those things the handbook listed as errors. Since the department required us to use the sixth edition of Diana Hacker's "A Writer's Reference," it was the obvious handbook choice. I knew the handbook well, and I knew that it was a required text for the classes I was studying. I read through each paper, marking errors according to the section in which Hacker covered them. This method isn't perfect. Some "errors" are actually stylistic choices. But interviewing each participant about each error was out of the question, so I marked them as errors anyway.

Then I counted the errors in each paper. But I figured it was the concentration of errors that was important, not the absolute number. That is, three errors in a single paragraph translate into a much higher concentration of errors than the same three errors spread across a ten-page paper. So I calculated each student's average number of errors per 100 words.
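
The calculation itself is simple enough to sketch in a few lines of Python (the word counts below are just examples, not real papers):

    def errors_per_100_words(error_count, word_count):
        # Concentration of errors, not the raw count.
        return error_count / word_count * 100

    # Three errors in a 150-word paragraph vs. three errors in a ~2,500-word paper:
    print(errors_per_100_words(3, 150))   # 2.0
    print(errors_per_100_words(3, 2500))  # 0.12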

The really tricky part is that I decided to divide the errors into "cognitive errors" and "performance errors." Cognitive errors follow a certain mental logic and are fairly consistent. A student may consistently write "a lot" as "alot." Performance errors are slip-ups. A student may type "they" instead of "the." I hypothesize that procrastination should show a stronger correlation with performance errors than with cognitive errors. Performance errors may pop up when a student is rushed and doesn't proofread. But cognitive errors are difficult for students to catch in their writing, since they don't usually realize that they are making a mistake at all. So proofreading is unlikely to help, unless the student gets outside help from a friend or tutor.

So how exactly can I look at a grammatical mistake and know what caused it? How do I know if the student merely made a mistake or whether they misunderstood the rules of writing?

Two ways. One, I looked for patterns. If a student always puts a comma before prepositions, that's a cognitive error. They're clearly using a rule, even if it's not the right rule. On the other hand, if a student only once fails to capitalize the word beginning a sentence, there's no pattern. It's not likely to be a cognitive error.

The second way I determined if errors were cognitive was based on my experience as a writing tutor and teacher. I knew that certain errors were more likely than others to be cognitive errors. For instance, students who violated punctuation rules when joining independent clauses were often obeying the "comma when you take a breath" rule. Knowing that certain types of error were quite likely to be cognitive allowed me to classify an instance as cognitive even if a pattern wasn't totally clear. But if I couldn't tell, I classified the error as a performance error.
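
If I had to boil that decision rule down to something mechanical, it would look roughly like this sketch. The short list of "usually cognitive" error types is only an illustration of the kind of call I was making, not the actual list from my coding notes.

    # Error types that, in my tutoring experience, usually reflect a misunderstood
    # rule. Illustrative only -- not the full list I worked from.
    USUALLY_COGNITIVE = {"comma splice", "fused sentence", "apostrophe misuse"}

    def classify_error(error_type, times_in_paper):
        if times_in_paper > 1:
            return "cognitive"    # a repeated pattern suggests the student is applying a (wrong) rule
        if error_type in USUALLY_COGNITIVE:
            return "cognitive"    # types that are usually rule-based, even without a clear pattern
        return "performance"      # when in doubt, call it a performance error

    print(classify_error("comma splice", 1))     # cognitive
    print(classify_error("capitalization", 1))   # performance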

I also calculated cognitive and performance errors per 100 words.

I Remember Coding Data

So I'm working on my methods section. But, you see, it's been a couple of years since I actually conducted my study. I've moved twice since then. So my memory is fuzzy and my records are poorly organized. It took me over a week before I tracked down the questionnaire I used. Now my next goal is to write about how I actually coded the data. Luckily, I put a great deal more thought into coding than I did into formulating the questionnaire. And luckily I recorded a lot of those thoughts on this blog. So I don't have to reconstruct quite as much.

Basically, I looked at three things in the students' papers I collected: length, surface errors, and use of evidence. So let's start with the simplest one, length.

Length is pretty straightforward. I counted the number of words in the papers. Some of the students who wrote the papers probably weren't familiar with this method of determining length, since they'd enlarged their margins and font size to make their papers physically larger. I did the word counts by hand, since I had hard copies of the papers. I didn't count words that were part of citations or headers. Only the actual body text. I did count words that were part of quotations.

I felt that length might not compare well across classes, because even though the three instructors were working with a pretty standardized syllabus, they might have had very different ideas of what length requirements meant. An instructor who automatically failed any paper which didn't meet a minimum length would likely receive longer papers than an instructor who considered length a guideline to help students figure out how much depth to go into. If such differences existed between the instructors of the classes I studied, I didn't want them to overshadow differences that might be associated with procrastination. So I found the median length of paper in each class in addition to an overall median length for all the papers. I considered a paper short if it fell in the first quartile of length in its class and long if it fell in the fourth quartile.
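
As a rough sketch of that calculation, here's how the quartile cutoffs could be computed in Python; the word counts are placeholders rather than my real tallies.

    import statistics

    # Placeholder word counts for one class's papers.
    class_lengths = [620, 740, 810, 850, 900, 955, 1010, 1100, 1230, 1400]

    q1, median, q3 = statistics.quantiles(class_lengths, n=4)
    print("class median:", median)

    def length_category(word_count):
        if word_count <= q1:
            return "short"    # first quartile of its class
        if word_count >= q3:
            return "long"     # fourth quartile of its class
        return "middling"

    print(length_category(650), length_category(1300))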