
On Counting Beans: A View from the Periphery


“We were born with opposable minds, which allow us to hold two [or more] conflicting ideas in constructive, almost dialectic tension. We can use that tension to think our way toward new, superior ideas.”

– Roger L. Martin

A recent call to reflect on writing-center outcomes assessment came nearly two decades after Lerner’s “bean-counting” articles were published, at a time when our classes of language-teaching professionals-in-training were examining the growing need to assess the outcomes of language-support programs. That call, together with our classroom examination of outcomes assessment, brought me back to my original excitement and conviction about responding to Lerner’s call (and to the calls of others since the 1980s) for more rigorous assessment, and also to my continuing feelings of disillusionment nearly five years after my own endeavor to contribute to the field.

Let me first share where I am coming from and the context of my personal reflection. My first contact with writing-center-related work was in 2000. Since then, I have worked both at and with writing centers or language-support units, directly (through regular appointments) and indirectly (by conducting workshops and research). My disclaimer is that I consider myself a peripheral participant in writing-center research because it is not a main research focus for me. Yet writing center work has always held a special place in my heart, and through such work I have been involved in developing, implementing, and evaluating programs and workshops for over 15 years, in addition to my current work of training future language-teaching professionals and carrying out research on second-language learning and teaching. Most recently, I have also served on the Research Grants Committee of the Council of Writing Program Administrators.

Flashback to 2009: After the generically named “The Writing Center” was established at my university and an assessment of academic language-learning needs was implemented in 2007-2008, I proposed a project to assess the outcomes of multiple components in our university’s language-support unit. The project was supported wholeheartedly by the university’s Director of the Learning and Teaching Center. Did our center demand or need this evidence to sustain the language-support unit at the time? No. Could the center carry on its work, as thousands of other such units across institutions did and still do, without this evidence? Yes. Yet my conviction about the need for evidence-based approaches and research-guided pedagogy led me to pose the ultimate, age-old questions that arise for any of us who have been involved in the writing center field: Are we having the effect we want? Does what we are doing matter?

So I proceeded with great enthusiasm, reading up on and being inspired by the work of prominent writing-center assessment figures and carrying out a year-long study of the center’s usage profile, learning outcomes, writing- and writing-center-related perceptions, and satisfaction with the support and services received. The usage-profile component was, to my knowledge (as indicated in my literature review), the first to go beyond a crude calculation of key variables, or what Scott Pleasant called “a numbers-served approach” (10). Specifically, I looked beyond conventional total user counts to examine the number of different users by month and the number of unique users over an academic year according to such key variables as gender, department, degree program, division/discipline, year of study, purpose of visit (for which course or for what writing needs), language background, and type of visit (e.g., drop-in vs. appointment). I conducted correlational analyses to reveal the relationship between time and usage counts and to ascertain the strength of trends over time, and I ran chi-square tests to determine whether differences in usage were significant according to each key variable. With this in-depth analysis of center usage, I hoped to make the case that such evaluation not only provides a fuller picture of who uses the center, when, and why, but also helps us (administrators and practitioners alike) to (a) determine whether the center is serving the needs of the students it is intended to serve, (b) understand how our limited resources are being utilized, (c) develop plans for staffing and training based on current and projected usage patterns, (d) reveal the center’s current and projected developmental trajectory over time with respect to those key variables, including whether it is keeping pace with demographic changes and working in sync with the institution’s recruitment plan, and (e) address Lerner’s point about our field’s lack of “standards” that we can use to gauge our own effects (Huang, “Forest of Forests”).
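
For readers less familiar with these analyses, here is a minimal sketch in Python of the kinds of tests described above, using entirely hypothetical data and standard routines (pandas and SciPy). It is not the study’s actual analysis or data, only an illustration of how a chi-square test of usage differences and a correlation between time and monthly counts might be run.

```python
# Illustrative sketch with hypothetical data (not the study's actual records):
# a chi-square test of whether usage differs by a key variable, and a
# correlation between time and monthly usage counts to gauge a trend.
import pandas as pd
from scipy import stats

# Hypothetical visit log: one row per visit.
visits = pd.DataFrame({
    "month": [1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6],
    "gender": ["F", "M", "F", "F", "M", "F", "F", "M", "F", "F", "M", "F"],
    "visit_type": ["drop-in", "appointment", "appointment", "drop-in",
                   "appointment", "drop-in", "appointment", "appointment",
                   "drop-in", "appointment", "drop-in", "appointment"],
})

# Chi-square test of independence: is type of visit related to gender?
table = pd.crosstab(visits["gender"], visits["visit_type"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Correlation between month and monthly usage count (strength of the trend).
monthly_counts = visits.groupby("month").size()
r, p_r = stats.pearsonr(monthly_counts.index, monthly_counts.values)
print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")
```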

The result of these efforts was a rarely seen, in-depth profile of a language-support unit, which I presented with high hopes of encouraging more such endeavors; these would allow writing-center administrators and researchers to develop repositories such as the one maintained by the Writing Centers Research Project at the University of Arkansas, and from such collections of sizable amounts of data we could then draw meaningful comparisons across comparable universities. Yet the process of trying to disseminate the results of the center-profile component took over two years. The editors of one primary publication for disseminating writing center research considered the results not “appropriate” for publication, oddly stating that the article was “a re-articulation of a best practice in writing center studies” and that “it is common for writing centers to engage in just this sort of statistical analysis” (emphasis mine). The editor of a second publication was unable, for more than a year, to find a suitable second reviewer with relevant expertise. This indifference from people in our own field, and the lack of qualified reviewers to properly assess such work, dampened my enthusiasm for advancing (or so I viewed it at the time) the analysis of center-usage profiles that every writing center already does in some form or another, rigorous or not; it also burst my first bubble, as I questioned whether the field really is ready for “rigorous assessment” (see also Pleasant) or is truly looking for change.

Did the process get easier with the second paper, which covered three other components (a learning-outcomes assessment involving over 330 drafts across disciplines, a writing- and writing-center-related perception survey, and a campus-wide satisfaction survey) and was shared both at a scholarly conference and through the traditional route of journal publication? Not much. My presentation of the findings at the annual conference of the American Association for Applied Linguistics (AAAL), the leading conference in my home discipline, went smoothly and was well attended because of the association’s long-standing tradition of, and well-established focus on, assessment. But when it came to journal publication, I struggled over whether to submit these findings to a journal in my home discipline or to one read by those who could directly benefit from the research. I chose the latter, and, though the review process again took longer than expected, the second anonymous reviewer stated some six months later: “I want to commend the piece. It is articulate and lucid. The defense of multi-method research design is hard to fault. The review and use of past studies is solid. And the scope and labor of the research study can only be admired” (Huang, “Are We Having the Effect?”).

The lengthy peer-review process aside, I should feel enthusiastic about continuing this line of research and about advocating what I believe is fundamental to establishing the legitimacy of writing center research from administrative, research, and professional perspectives. I should also be enthusiastic about transforming perceptions and elevating the work of writing centers not just as service units but as an integral part of every student’s educational experience. These goals are lofty, I know. Yet, interestingly, when asked whether I had considered or would consider conducting more outcomes assessments of language-support units, I found myself hesitating.

As I hesitated, I also reflected on how my institution’s language-support unit responded to my initial sharing of results and recommendations, both pre- and post-publication. The reluctance was certainly there, and naturally so, for who would not resist an outsider crossing into their territory and revealing through research all the strengths and weaknesses of their operation, even though overall the results were quantifiably positive? I also reflected on the deflating process of disseminating the findings (though such experience is certainly not unique to this field of research) both within and beyond our institution through presentations and publications. I asked myself: Has the field in fact witnessed a growing body of research recognizing the importance of “writing program assessment” in response to the call of the 1980s, or since Lerner published his seminal articles? Or is the drumbeat for writing-center assessment merely lip service? Are writing center professionals truly ready for what they, or their institutions, have been asking for?

From the perspective of a peripheral participant in writing center research, and drawing on my personal experience and involvement with both the research and professional sides, I would like to share briefly a few central issues that I believe have kept this area of research from moving forward.

1. A Love/Opportunity – Hate/Threat Relationship: Few would argue against the need for and benefits of conducting assessments, but any research also has the potential to produce a negative result or, after all that effort, no significant results at all. Then what? As has been said, statistically nonsignificant results do not mean that the results are unimportant. But how might the results be interpreted in terms of funding support? Although it is widely recognized that assessment helps identify the strengths and weaknesses of a unit, would the bean-counters (i.e., university administrators) be able to see beyond the limitations of a study whose results are negative or nonsignificant?

No single study can address all questions about the efficacy of any program or unit, nor can any single study fully measure a program’s or unit’s impact at all levels. No research method is inherently superior, and one should guard against putting the methods cart before the research-question horse. The perceived “objectivity” of quantitative studies must be weighed against other sources of data and the rigor of the methods employed. Institutional support for ongoing, multiyear studies is therefore needed if program administrators are to feel safe about engaging fully in the learning such work involves and in the process and products of the outcomes assessments they choose to pursue.

2. Research Expertise: Efforts to advance the field have also been hampered by a lack of expertise, no longer just in quantitative methods but also in complementary multiple methods or well-integrated mixed methods (see, for instance, the works by Creswell or by Burch and Heinrich), and in the measures researchers can take to enhance quality regardless of methodological approach. This lack extends beyond the expertise needed to conduct the research, for expertise is also needed to review work submitted for publication.

One way to enhance methods expertise within the writing-center community would be pre- or post-conference professional-development workshops aimed at elevating the methodological know-how of both practitioners and administrators. Collaborating with other practitioners or with researchers from other disciplines, within or outside one’s institution, would be another way of addressing commonly raised concerns about expertise. Because this approach involves relinquishing a certain degree of control, which many writing-center administrators may find hard to do, it requires establishing trust among the key players involved in assessment. A case in point was the assessment project I conducted as someone neither working at nor affiliated with my institution’s writing center. From the university administrator’s perspective, having someone from outside the center who had the expertise needed for such work lent a degree of objectivity and rigor to the assessment outcomes. But this perception may not be shared; in my case, for the reasons previously mentioned, it naturally caused some feelings of reluctance and uncertainty on the part of the center’s administrator and staff.

3. Criteria for Assessing Quality: The criteria used to assess the quality of writing center research must consider the unique context and nature of its work. This was one of my biggest lessons from the project, and one I would like to reiterate with respect to evaluating work in this specialized area of research. The issues involved include entrenched biases about what constitutes “scientific research” and about the measures used to assess quality. One key issue raised by a reviewer had to do with the inter-rater reliability (i.e., the degree of agreement among raters) achieved by three experienced raters rating over 300 drafts. While the reviewer’s criticism of the reliability values reflected the “norm,” I appreciated the opportunity to make the point that those values also revealed the realities and challenges that writing-program administrators face when assessing outcomes, owing to the very nature of the work that writing centers take on. The few published studies that achieved acceptable inter-rater reliability did so, for example, by narrowing the texts assessed to particular types from a particular course. As we are all aware, most writing centers deal with work from a wide variety of disciplines, including multiple types and genres of texts. The evidence of effects must therefore be gleaned through other analyses, as explained and evidenced in the article (Huang, “Are We Having the Effect?”).
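
To make the notion of inter-rater reliability concrete, here is a minimal sketch, with entirely hypothetical ratings, of one common agreement statistic, Fleiss’ kappa, for three raters assigning drafts to categorical score bands. The published study does not necessarily report this particular statistic, and the numbers below are invented purely for illustration.

```python
# Illustrative sketch (hypothetical ratings, not the study's data): Fleiss'
# kappa as one common measure of agreement among three or more raters who
# assign items to categories.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of raters who assigned draft i to category j."""
    n_subjects, n_categories = counts.shape
    n_raters = counts[0].sum()  # assumes the same number of raters per draft
    # Observed agreement per draft, then averaged across drafts.
    p_i = (np.sum(counts**2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from the overall category proportions.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.sum(p_j**2)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 5 drafts, 3 raters, 3 score bands (low/mid/high).
ratings = np.array([
    [3, 0, 0],   # all three raters agree: "low"
    [0, 2, 1],
    [1, 1, 1],   # complete disagreement
    [0, 0, 3],
    [0, 3, 0],
])
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.2f}")
```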

Writing center research has been dominated by qualitative research. Given the multifaceted and interconnected variables within the sphere in which writing center work takes place, the qualitative orientation has its place for examining questions through the interpretive lenses needed to capture the complexity of such work. And, as we all know, quantitative research has its own limitations in identifying subtleties and complexities through more positivistic enquiries. But quantitative research also has its place. With the demand that service programs demonstrate objectively measurable outcomes, Reardon’s sentiment that “no writing center administrator can ever rest too comfortably regarding his or her center’s continued support or funding” (par. 2), as evidenced by what recently happened to Wilfrid Laurier University’s writing center (“Wilfrid Laurier University cuts”), reminds all writing-center administrators of the need for positivist understandings of how effects are measured. This reminder inadvertently leads us once again to reevaluate the body of research and the methods researchers have used in assessing writing-center work. Certain widespread, deep-rooted perceptions of and prejudices against qualitative studies, which researchers have worked extremely hard to dispel, compounded by the methodological-expertise issues mentioned in point 2, mean that those who are contemplating or have embarked on assessment work find themselves between a rock and a hard place. Moreover, the demand to enter unfamiliar methodological territory also poses potential challenges to researchers’ intellectual self-identity.

Has the field of writing-center outcomes assessment risen to the occasion nearly two decades after Lerner’s “bean-counting” articles? I will let you be the judge. Granted, more writing-center administrators, practitioners, and scholars have engaged in outcomes assessment since that time, most notably in the work of Ellen Schendel and William Macauley. Yet, given the number of writing centers or language-support units in schools, colleges, and universities across North America, the reflections in the May/June issue of the Writing Lab Newsletter seem to suggest that the field is still mainly calling for more of us to get involved and dispelling trepidation about conducting rigorous assessment of writing-center work (see also Pleasant). Lerner’s words still hold strong relevance today: “Assessment does not have to be shrouded in mystery. It is an activity that all of us can do and, in fact, should do if writing centers are to continue to develop both individually and as an academic field” (“Choosing” 3).

Counting beans and making beans count are important from multiple standpoints: administrative, programmatic, best-practice, and scholarly. We still need to move beyond treating the assessment practices of language-support units as means and ends that are rarely evaluated critically, toward openly discussing the elephant in the room and rigorously evaluating whether what we do does indeed work and matter. We must, on the one hand, recognize and celebrate the progress, albeit at glacial speed, in this area over the past few decades, and, on the other hand, take heed of lessons about pendulum swings, in this case the danger of downplaying qualitative evidence while advocating quantitative approaches. We must be willing to acknowledge that, for any research field to thrive, those involved must help advance knowledge and practices by making well-informed methodological choices that include all modes of inquiry appropriate to critically studying our questions, whether those questions are unique to the writing-center field or applicable across various research areas within it, and always situated within unique institutional contexts. We must never cease questioning the definition of quality, and who defines it, in our quest to firmly establish the work of writing centers and other language-support units as “legitimate” contributors, like any academic unit, to our students’ educational experience. And we must do the same in establishing such research as an independent and robust field that can inform policy, programming, and practices, as well as stand up to scrutiny by experts from different methodological traditions.

Works Cited

Burch, Patricia, and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation. Thousand Oaks: SAGE, 2015. Print.

Creswell, John W. A Concise Introduction to Mixed Methods Research. Thousand Oaks: SAGE, 2015. Print.

Huang, Li-Shih. “A Forest of Forests: Constructing a Centre Usage Profile as a Source of Outcomes Assessment.” Canadian Journal of Education 35.1 (2012): 199-224. Print.

---. “Are We Having the Effect We Want? Implementing Outcomes Assessment in an Academic English Language-Support Unit Using a Multi-component Approach.” WPA: Writing Program Administration 35.1 (2011): 11-45. Print.

Lerner, Neal. “Choosing Beans Wisely.” Writing Lab Newsletter 26.1 (Sept. 2001): 1-5. Print.

---. “Counting Beans and Making Beans Count.” Writing Lab Newsletter 22.1 (Sept. 1997): 1-5. Print.

O’Dwyer, Laura M., and James A. Bernauer. Quantitative Research for the Qualitative Researcher. Thousand Oaks: SAGE, 2014. Print.

Pleasant, Scott. “It’s Not Just Beans Anymore; It’s Our Bread and Butter.” Writing Lab Newsletter 39.9-10 (May/June 2015): 7-11. Print.

Reardon, Daniel. “Writing Center Administration: Learning the Number Game.” Praxis: A Writing Center Journal 7.2 (Spring 2010): n. pag. Web.

Schendel, Ellen, and William J. Macauley. Building Writing Center Assessments That Matter. Boulder: UP of Colorado, 2012. Print.

“Wilfrid Laurier University cuts 22 positions amid financial difficulties.” CTV News Kitchener. 10 March 2015. Web. 10 March 2015. http://kitchener.ctvnews.ca/wilfrid-laurier-university-cuts-22-positions-amid-financial-difficulties-1.2272599.

 

