
Introduction: FAQ

Daniel J. Royer and Roger Gilles

Soon after the publication of "Directed Self-Placement: An Attitude of Orientation" in College Composition and Communication, we began receiving e-mail. Faculty at colleges big and small were asking us questions: Do you think this could work at our school? What did you do to convince administrators to let you try this? Are you sure this works? We plan to try this next semester; can you give us some advice about talking to students? We heard from Research I universities and community colleges. We heard from schools right down the road, and we heard from places as far away as Israel. We knew from the reaction stirred on the WPA-L listserv prior to publication that DSP was a provocative idea people were ready to engage, and three years later we are still corresponding with faculty and continue to hear from those who are taking up the issue fresh.

David Blakesley, then at Southern Illinois University Carbondale, had picked up on DSP on the WPA-L and was developing a first-year writing program that combined Stretch (see Glau) and DSP before our article was in print. Thus, Blakesley and his graduate-student colleagues began immediately gathering data and developing their own history with DSP. They tell some of their stories in this collection.

Peter Elbow has offered steady encouragement for us and for the idea of DSP. He read early drafts of our 1998 CCC article and gave us many valuable suggestions. His essay, appearing as the first chapter in this collection, explains how he views DSP in relation to the larger issues of assessment.

This collection of essays is really a first response to some of the questions we have received, and the authors here raise many new questions of their own. As readers of these essays will quickly understand, this volume does not offer a final response to every placement question that DSP raises. However, it will give those schools considering alternatives to traditional placement a place to begin and, we hope, supply them with important principles and practices upon which to build.

What exactly is directed self-placement?

As the essays in this volume show, there is no single directed self-placement method. But we would say that directed self-placement can be any placement method that both offers students information and advice about their placement options (that's the "directed" part) and places the ultimate placement decision in the students' hands (that's the "self-placement" part).

At our own institution, Grand Valley State University, the direction we offer to students has taken three main forms, although we continue to adjust what we do as we learn more about our students, their decision-making, and our curriculum. First, we send to all admitted students a letter and brochure that describe our courses and the placement decision they will need to make in order to register for classes at summer orientation. The brochure includes a brief self-inventory that asks students to consider their experience and expertise as writers. Our letter urges students to consult with parents, teachers, and counselors as they consider this important decision prior to orientation. Second, we address students during the orientation itself, basically reviewing the same information about our courses and impressing upon students the importance of their decision. We also share with them some information about past students' performance in our courses. And finally, we ask students to fill out a formal "choice card" that reports on the students' self-inventory and indicates which course they feel is right for them. We ask students to show this choice card to their assigned orientation adviser prior to registering formally for classes. If students have questions or concerns, we encourage them to discuss their situation with their orientation adviser or with a member of the writing program.

As the essays in Part Two show, however, other schools have created other ways to direct the students' placement decision. Kutztown University, for example, asks students to detail their specific reading and writing experiences over the past year or two (see Chapter Six by Janice Chernekoff). Belmont University provides brief reading and writing samples to illustrate their program expectations, and they even ask students to produce and self-evaluate a brief writing sample of their own as part of the placement process (see Chapter Five by Sims and Pinter). There are many possible variations, and obviously any schools interested in DSP need to consider their own students and courses in order to determine exactly what kind of direction will help their students the most.

Where did the idea of directed self-placement come from?

The idea of self-placement has been around for a while, and people have had different experiences with it, many of them negative. Edward M. White writes on the WPA-L:

Some years back, a student of mine gathered substantial data on self-placement vs. test placement in six California community colleges. About half of the students placed themselves differently than the test (a pretty good one, the Cal State U EPT) would have placed them.

White doesn't at this point question the validity of test placement as a standard by which to judge self-placement, nor does he distinguish, in this context, between self-placement and directed self-placement (we do both). But more to the point, in May 1998, Mary Barkley writes in this same thread:

I used to coordinate placement testing for comp courses at the University of Tulsa. The test included a sentence combining exercise and essay. On a separate sheet, the students indicated for which section-basic, comp 1, or advanced-they were testing. Of course, evaluators did not see this sheet, but I did as I was compiling results and making final placements. I was struck by the frequency with which students placed themselves appropriately. Only a handful each year placed themselves inappropriately, always future comp 1 students who thought they belonged in advanced.

A second observation: incoming students were making this judgment without much information about the courses, only a written description. With more information and the implicit commitment of their own choices, their self-placements may have been more accurate than ours. We didn't track those who selected advanced but were placed in comp 1. [emphasis added]

Again, the question of what constitutes "appropriate" placement is not discussed here; it is assumed that teachers reading timed essays are the arbiters of appropriate placement. But, again, while the notion of self-placement is not novel, the term "directed self-placement" is our own coinage, and the idea of directing self-placement has precedents in common sense. For example, Doug Hesse relates this anecdote, also on the WPA-L list:

The comments about directed self placement remind me of an experience I had twenty years ago this summer. I was attending a German language and culture program at the University of Vienna with a diverse group of people from around the world. The first morning they had all of us (maybe two hundred) meet in a large auditorium for purposes of placement. There were six levels of courses.

The placement test consisted of slides full of text projected onto the screen at the front of the auditorium. The texts got progressively more sophisticated in terms of vocabulary and syntax and, even, cultural context. We were to decide, individually, whether we could read and understand the passages. At various intervals the placement folks would ask those who felt they'd reached their limit to leave the room and out of them would form classes at that particular level. We thus placed ourselves. While I have no idea how valid or reliable the test was, I remember feeling appropriately placed in my level four course, and I remember the whole process as festive and congenial and refreshingly in contrast with my preconceptions about a German-speaking university. But then this was Vienna and not Heidelberg.

It's interesting even here how, in the face of what seems so obvious, the concern for validity and reliability still pesters Hesse's recollection of the event. Blakesley's article in Chapter Two of this collection provides a thoughtful analysis of why and how the reigning institutional practices grip our imaginations and keep us from pursuing programmatic change.

Cornell and Newton's essay in Chapter Seven describes the four-year placement study at DePauw University, including their work with DSP. In 1995 they began writing letters that recommended, and later asked, "at-risk" students to choose their course placement. Their study reports data on students with different readiness predictors, genders, and ethnicities. They track the success and retention of several sub-groups of students who were either mainstreamed or who chose, or chose not, to take a two-course writing sequence. This is a detailed study with hard data, and it confirms the sense reported by others in this volume that real student needs have been grossly out of step with our curricula, pedagogy, and placement practices. The tail has been wagging the dog until, as at DePauw, Belmont, and other programs described in the following pages, we begin to see curriculum and placement practices do an about-face, now serving the exigencies of actual practice and student need instead of an essentialist ideal of student ability-categories and paper curricula. Much still needs to change and improve, but the DePauw study argues strongly that as long as we remain wed to placement practices that mask problems in our grading, curriculum, or pedagogy, we will commit the fallacy of misplaced attention: scrutinizing the instruments of placement when we should be looking at our programs, coursework, and classroom practice.

In 1998, Cornell and Newton modified their letter to students a little more along the lines of what we at GVSU had begun calling directed self-placement. The origins of DSP at Grand Valley are narrated in the 1998 CCC article and need not be repeated here. Our essay in Chapter Three explains more fully the pragmatist roots of DSP. Our own concern, it seems, has been motivated more by the psychological and practical dynamics of decision-making and their consequences than by a concern for quantified proofs. In the pragmatist tradition of James, Dewey, and Peirce we have found insight for deeper development of our simple notion of directed self-placement, and we discover there as well the "attitude of orientation" that perhaps permitted the various factors and players in our situation at GVSU to give rise to the notion of DSP the way they did. Perhaps, as "non-experts" in the placement business, we were not so deeply wed to the rhetoric of prevailing practice as to be unable to imagine possibilities outside the existing structural boundaries. More important than the question "Where did the idea of DSP come from?" is "Where is it going?" All of the following essays have something to say about that.

What is the relation of assessment to placement, and how does DSP fit into larger issues of mainstreaming and university standards and requirements?
The inertia of bureaucracies in motion often protects the inert ideas that fuel them and keeps us from seeing the web of connections that links one aspect of what we do with another. The insistence on the strict independence of ideas is precisely what creates inert ideas, and to some extent, it is what a number of the authors in this collection believe is wrong with current placement practices. Peter Elbow's "Directed Self Placement in Relation to Assessment: Shifting the Crunch from Entrance to Exit" (Chapter One) addresses this point most directly. He points out in this essay that assessment and placement are closely related, and that learning and growth are additional factors that cannot be left out of the equation.

Even though DSP is clearly a form of placement, it probably has more in common with mainstreaming than it does with other forms of placement. The connection lies in our shared eagerness to get all students into the curriculum-to give them a chance to get started, to begin the process of learning about and becoming a part of the university discourse community they have joined. Mainstreaming seeks to place students side by side in a given curriculum, and then to give the necessary support to students who need it-peer tutoring, extra conferencing, library-skills help, or whatever. The idea is that time alone is not the main factor in helping students develop their writing abilities, so adding courses is not the only way to conceive of developmental instruction. DSP also invites all students to enter the curriculum at the same point; that is, it requires no one to take an extra course. But it does provide an extra course for those who feel that more time is precisely what they need as developing writers.

Is DSP anti-assessment-that is, an argument against our ability as faculty to assess student writing fairly and accurately?
DSP argues against our ability as faculty to assess student writing fairly and accurately during the placement process. Fair and accurate forms of authentic assessment are certainly within our reach, and we wholeheartedly endorse them as part of classroom and program assessment. Indeed, our own commitment to program-wide portfolio grading at GVSU, a system in which final student portfolios are team-graded by teachers at the end of each semester, convinces us of the impracticality of replicating such authentic assessment measures as part of the placement process. As we argue in Chapter Three, we would much rather put our admittedly limited time and energy into providing authentic assessment to students at the end of our first-year composition program, when the students have had a chance to get to know the university and its culture, to see and understand the writing expectations of our program, to draft a range of papers in response to a range of assignments, to give and receive feedback from other student writers, to work closely with a teacher for at least one full semester, to revise and edit their papers, and finally to select their very best work for submission for grading. Far from being anti-assessment, then, DSP seems to us to be strongly in favor of assessment, so much so that we insist on doing it only when we can do it well.

Is DSP just a cheap substitute for better, more resource-intensive forms of placement?
We note in our own essay that authentic inquiry begins with real doubt. Real and serious doubt is what finally goads inquirers to action, not cost savings. The cheapest form of placement uses a static index such as an ACT-English score to place students. But those schools that have begun experimenting with DSP have not been motivated by mere efficiency. Kutztown University is a good example of the more common reason for looking into DSP, namely, doubt. Janice Chernekoff describes a not uncommon placement program, one involving ACT scores and direct writing assessment. She also describes a "lack of belief" in the placement system. For no blatantly obvious reason, deep suspicion and lack of confidence grew up around a placement program that gradually became viewed as "arbitrary, subjective, and punitive by both students and parents" (6:3). The situation there was no different from (and in some ways better than) that at many other placement programs, yet even without validity and reliability statistics to back up the intuition, everyone sensed that nobody really believed in what they were doing. Readers of Chernekoff's essay will learn how DSP has changed things at Kutztown, and that "resource intensive" does not equal better placement.

Money is often easier to come by than enthusiasm, belief, and commitment. Blakesley's essay (Chapter Two) will remind readers that DSP is far from a cheap, self-regulating, auto-pilot placement system. Furthermore, Tompkins notes in his essay (Chapter Nine) that DSP leverages some of its potency through the Hawthorne effect. He writes, "I can assure you that when I stood before those students who were taking placement tests, they responded very well to me in large measure because for perhaps the first time in their academic lives, and certainly the first time during their placement test experience at JTCC, someone took a few moments to explain the procedures and the rationale for them, to inform them about what they might expect from JTCC writing courses, and most importantly, to allow them the rare opportunity to make their own choices about their academic future" (9:7). The challenge for those using DSP, as Tompkins goes on to point out, is to maintain this level of attentiveness and concern, invoking the Hawthorne effect for each incoming student and thus taking advantage of the positive consequences that showing a serious interest in students affords.

Placement should be related to curriculum and pedagogy. This is why many schools have been trying to use portfolio placement. How does DSP relate to curriculum and pedagogy?
Several of the essays in this volume comment on the relation of DSP to pedagogy and curriculum. But the essay by Sims and Pinter shows how far a well-run timed essay, carefully monitored and meticulously scored, can fall short of ideals such as "sharing power, forming relationships, fostering reflection," as the subtitle of their essay forecasts. At Belmont University, DSP replaced an exam placement program because the faculty wanted to align the placement process with the writing program's philosophy, pedagogy, and practices. Even contemplating DSP shifted the focus of faculty discussion to students' perceived needs and current course offerings. DSP was adopted along with a one-hour companion course that students might choose to take. And in a wonderful shift, "the placement test became the placement process" (5:3), as the time that used to be spent testing was replaced by time spent communicating program standards and expectations to students, who were now invited to take responsibility for much of this new placement process.

If a school's placement method serves as an indicator of what it values about writing, and indeed how the school conceives of writing in the first place, what kind of a message about writing does DSP send to high school students, their parents, and other tax-paying citizens?

Phyllis Frus writes in her essay (Chapter Eight) that DSP "seemed to offer us some of the pedagogical and other benefits of portfolio placement, such as the ability to communicate the kinds of writing we expected students to have done in high school. . . . administrators and faculty also looked forward to fewer appeals of placement decisions, a less-resistant population of students in the developmental writing course, and an entering class with morale improved by having been given the privilege of as well as the responsibility for making an important choice" (8:3).

And, as we point out in Chapter Three, DSP sends an important message about the importance of self-assessment in writing generally. If the old message sent by the direct assessment of writing during placement was, "At the university, we value good writing," we now add "and students' ability to assess their own needs and abilities as writers." With DSP, we continue to say "we value good writing" by highlighting the place of writing in our curriculum and sharing the expectations of the first-year writing program; we say "we also value students' ability to assess their own needs and abilities as writers" by asking them to assess their own needs and abilities as writers!

Moreover, in recent discussions about DSP, the concern is sometimes raised that DSP sends the message that writing teachers can't distinguish between good and bad writing. Indeed, the field of writing studies often seems not nearly so interested in talking about how to assess student writing at the end of the term as it is in how to place students based on a single piece of writing or a portfolio at the beginning of the term, before much at all is known about the students' writing ability. The purpose of placement ought to be to determine how ready a student is to enter a given curriculum, a readiness that involves factors such as writing skill, motivation, grade expectations for the course, the difficulty of the course, and the student's attitude toward the work. Unfortunately, no matter how expert faculty become at reading and scoring placement essays, these readiness factors are very difficult to divine from a portfolio of writing, let alone a single placement essay. For, again, the task is not so simple as merely determining the quality of a particular essay (is it good or bad writing?); rather, the task is to project, without knowing anything about or even laying eyes on the student, where that student should begin in a curriculum in order eventually to pass the regular, required course. Faculty "reading the leaves" in portfolios or placement essays will undoubtedly, after such effort, believe they are sorting students properly, but there is little evidence that trained readers can do more than predict who will excel and who might fail a course, while giving no attention at all to these other readiness factors. Indeed, those who argue about the accuracy of a placement method generally minimize the importance of final grades, preferring instead to survey teachers about their students' placement a certain number of weeks into a course. Too many variables go into final grades, they argue, to consider them an indicator of the accuracy of a placement decision. This is precisely our point: if the standard of "accurate" placement is a teacher's view of the student's ability to write, somehow purified of the other readiness factors we mention above, then we end up with a very reductive view of students as learners. These "too many variables" are precisely what we believe must be anticipated in the directing aspect of self-placement. Of our own school's experience, we write in Chapter Three, "trained, timed-essay graders of a two-hour exam could not place students in such a way that gave them any particular advantage or disadvantage in passing with a C or better the regular first-year writing course." Note the emphasis on "passing." We have yet to see any evidence to the contrary at any other school. Students, parents, and taxpayers are less impressed with inter-rater reliability among essay readers than they are with seeing students get started at the point in the curriculum that is most appropriate for each of them.

Aren't the least capable students-students who need basic writing-the very students who are least capable of choosing the right course? And don't placement decisions based on student self-perception create biases against certain populations-women, for example, or minorities?

In fact, there is convincing research demonstrating that self-efficacy (task-specific confidence) correlates positively with actual writing ability. Erica Reynolds synthesizes that research in Chapter Four, "Self-Efficacy and Directed Self-Placement: Apprehension, Confidence, and Gender Components," and notes also that students who are highly apprehensive (who lack confidence) also report weak self-efficacy. Her summary of the research indicates that writing apprehension predicts what students will attempt with regard to writing in the first place. This finding supports the idea that students with high writing apprehension will be less likely to enroll in an upper-level writing course.

Furthermore, Reynolds reports that
Dale Schunk addresses this phenomena in, "Self-Efficacy and Academic Motivation," pointing out that self-concept, which incorporates self-esteem and self-confidence, is "hierarchically organized, with a general self-concept at the top and subarea self-concepts at the base" (212). He maintains, based on Bandura's social cognitive learning theory, that of self-concept's various dimensions, self-confidence seems the most akin to self-efficacy and that "in the hierarchy, self-efficacy judgments would lie at low levels because they are generally construed to be domain specific," i.e., Algebra or Geometry (212). In other words, even within an academic area, a high self-concept does not imply that students feel highly confident about their abilities in all academic areas, or necessarily confident in general. (4:15-16)

Studies she cites find that females have more positive attitudes about writing, while males have less positive attitudes and experience more apprehension. This research supports the evidence, reported by several programs using DSP, that males place themselves in basic writing courses at a higher rate than females do. Given that one of the goals of a basic writing course is to improve skills and confidence, this is certainly an appropriate placement.

The research reported in Reynolds's essay helps us understand how student apprehension, confidence, and gender factor into the dynamics of directed self-placement.

Doesn't DSP change the basic-writing population-and thus change the basic writing course itself?

Indeed, DSP can upset the status quo of our programs and curriculum. With students participating in the placement decision, it becomes more important to respond to students' real and perceived needs. DSP caused us at GVSU to face the fact that if nearly all students could pass the regular, required first-year writing course, then we, as administrators and faculty, faced an unwelcome ethical dilemma whenever we told students that our placement would not allow them to take this course, one we knew they were very likely to pass. Rather than face the real problem (grade inflation? a weak curriculum? a freshman class with such a high profile that a basic writing course is simply not needed?), some programs may be asking students to shoulder the burden of our own administrative confusion or inability to fix other problems with our programs. Is it easier just to keep placing students in basic writing courses than it is to create standards for passing the regular course that in fact make a basic writing course a real necessity instead of a theoretical given? Has our basic writing course become such an economic necessity that eliminating it, or radically changing it in response to the "reality therapy" of DSP, is more difficult than simply insisting that a certain percentage of students take the course?

The writing center at the University of Michigan faced a number of these difficult dilemmas. The seven to eight percent of students they had been placing in the practicum course was, according to DSP, much higher than needed. The freshman profile at UM is quite impressive, and students arriving on campus had a very difficult time viewing themselves as writers in need of extra help. The essay by Phyllis Frus boldly faces many of these tough questions raised by DSP.

Finally, DSP asks the question: what helps and what hurts students? If students pass the regular class, and if they do so happy, confident, and without trauma, then they didn't need a basic writing course. If they pass the course but still can't write well, then we don't have a placement problem; we have a curriculum or teaching or grading problem. It is precisely the strength of DSP that it points us toward these real problems and keeps us from making our students shoulder what are really our problems as faculty and administrators. It is an ethical issue. The last thing we want DSP to do is prop up basic writing courses that students don't need. That would be self-serving. The students at UM, by all objective standards, are the cream of the large-school crop (97% have a high school GPA of 3.0 or higher and an ACT of 26-30, and they report very high self-confidence). It would not surprise too many people if very few of them needed a basic writing course. Furthermore, if there are no obvious drawbacks to having students go right into first-year writing courses, it is very difficult to justify a traditional basic writing course. It may often be the case that DSP forces us to shift our over-concern with getting more students to take the basic writing course and refocus on designing a companion course that meets students' real needs, so that they will be knocking on our door, not we on theirs.

The essays by Sims and Pinter on Belmont University, and by Cornell and Newton on DePauw, illustrate how this dynamic can quickly change a curriculum. Cornell and Newton write: "But the more data we collect, the clearer it becomes that mandatory placement of such students in our first course is not justified. For that reason, directed self-placement has become for us not an accommodation but a matter of principle" (7:1). The essay by Frus shows that when writing centers are involved, DSP causes us to examine honestly the kind of services we offer. Given what students need in order to do well enough to pass the regular first-year course (in a way that pleases students, not placement experts, for many students will happily risk a C grade before they will take an extra course, a risk many of our prevailing placement programs blatantly ignore by targeting only the very good and the very bad), DSP also causes us to reevaluate the kinds of programs we offer and our reasons for offering them.

What if students choose the wrong course? Haven't we failed in our job of placing them into the course they really need?

To us, this is similar to asking, "What if students choose the wrong major during their freshman year, and then end up switching to another major later? Haven't we failed in our job of helping them set themselves on a productive career path?" Switching majors seems to be a part of the learning or growing process of many college students. It can sometimes set students back, perhaps by requiring additional coursework or by adding new GPA requirements. But to call the first declared major a "mistake" would be to deny the process that students often undergo as they arrive at their life's work-or at least their initial life's work. It would also assume that every student has a "true major" that is only to be found, preferably very early on. This view simply does not square with experience.

Likewise, questions about the "wrong course" or "improper placement" assume some kind of "true placement" against which a placement method must measure itself. Proper placement-being in the "right" course-is a complex matter of deep context that includes not merely the student's writing ability at the beginning of the semester, but more importantly the student's opportunity for success in the course, a prediction that is very difficult to make accurately based on a single piece of writing or even a whole portfolio of writing no matter how much expertise and training we give our faculty. For example, what does a whole portfolio of writing tell us about the motivation and interest-level of a student who enrolls with the thought that she intends to put all of her energy this semester into her calculus class and just "get by" in everything else? Perhaps for such a student the "right" class is a basic writing class as a gentle alternative to the rigors of the regular class. We will never know if we rely only on faculty reading essay exams or portfolios of student work. A multitude of other examples could be given.

So does DSP work? Does it place students correctly?

This, of course, is the bottom line, and it is the question that itself raises so many important questions. What is "correct" placement? What do we mean by "works"? The DePauw chapter in this volume supplies a four-year study of the academic success and persistence at DePauw of their "at-risk" first-year students, some of whom chose a basic writing course and some of whom chose to move directly into the regular, required course. These students were also compared with a cohort that was mainstreamed. Chapter Ten also provides data indicating how DSP has "worked" on the Southern Illinois-Carbondale campus, as do several other chapters. But all of the chapters in this volume ultimately address this key question, even those that seek to expand or revise our notion of what it means for a placement method to work. So there is no simple answer. Our collection of answers follows.

Works Cited

Glau, Gregory. "The 'Stretch Program': Arizona State University's New Model of University-Level Basic Writing Instruction." WPA: Writing Program Administration 20 (1996): 79-91.
