Methods

Context

The surveys reported on in this chapter were carried out at a large, public land-grant university in the northeastern United States. Because it is a research university, tenured and tenure-track faculty are expected to write consistently as part of their research agendas, but for other university employees—including graduate students and staff—non-research writing claims an equal or greater part of their time. Undergraduate students encounter writing instruction primarily through a first-year composition course and two “writing-intensive” courses housed in an academic discipline, though faculty do assign (often less substantial) writing projects in other courses across the curriculum as well. While the first-year writing program has recently revised its curriculum to include multimodal composition in all course sections, there is no expectation that students produce multimodal work in writing-intensive courses; in fact, according to the university’s general education guidelines, the primary criterion for writing-intensive courses is that “a minimum of 15 revised and edited pages must be assigned,” which suggests that writing is assumed not to include genres that cannot be arranged on printed pages. Writing-intensive courses are supported by the university’s writing center, but expanding faculty’s definitions of writing has not been a priority of the writing center’s directors.

Study Design

This study was designed to determine the broad contours of what writing looks like in diverse disciplines at this research site, and especially the prevalence, purposes, and value attached to multimodal composition across the university. I opted for survey methods in order to cast a wide net and gather data from a large number of respondents, even though the data would not ultimately allow me to capture individual textual examples with much depth or precision. The main information these surveys sought to collect was a fairly comprehensive accounting of the kinds of media faculty and students use when they compose. The questionnaires I designed also asked how often and under what circumstances respondents created the text types they selected. I took this approach because, as noted in the literature review, there are at present more qualitative case studies of multimodal writing in the disciplines than there are surveys, even though curriculum decisions may need to depend more heavily on large-scale data than on situated thick description.

The one notable exception is Reid et al.’s (2016) survey-based study summarized in the literature review. Since my study’s objectives were closely related to Reid et al.’s, I designed my study as a replication study, which, as Raucci’s (2021) recent, nuanced CCC essay points out, does not mean it uses the exact same methodology, but rather that it pursues similar questions using comparable methods, allowing conclusions to be meaningfully compared. Haswell (2005) has called for more research in composition studies that is replicable, aggregable, and data-supported (RAD), noting that most of the articles professionals see published in the field’s flagship journals do not meet these criteria. Since Haswell made this case, he has often been cited but much less often taken up, especially in his appeals for replication and aggregation in research. My study made use of Reid et al.’s (2016) RAD design, which explicitly invites replication, in order to 1) corroborate the results of their study at a different time and site, and 2) build on the methodology Reid et al. had constructed in order to ask new questions.

While my survey of academics used a modified list of text types drawn from Reid et al. (2016), my survey instrument diverged from their questionnaire in several significant ways. First, I asked not only what kinds of texts faculty produced but also how frequently and in what areas of their work. This entailed much more intensive and qualified reporting on the part of respondents than did Reid et al.’s more streamlined checklist. As a consequence, I chose not to solicit as much in the way of qualitative open-ended responses as Reid et al. did, including only one optional field for additional comments at the end of the questionnaire. Keeping the survey lean, with a relatively low level of participant investment required (average response time was only eight minutes), enabled me to obtain a large sample (see below). One final, major difference in methodology is that I asked only one general question about assignments in the questionnaire for faculty. Unlike Reid et al.’s study, however, my study also featured a companion questionnaire distributed to students across the disciplines in order to gain insight into what students were being assigned. The student questionnaire asked participants to catalogue the kinds of texts they produced for academic contexts, using a list of text types very similar to the one in the survey of academics, so that data from the two surveys could be analyzed alongside one another.

Participants and Sampling

Both questionnaires were developed with Qualtrics and distributed to different university populations.

Survey of academics

The first survey sought to recruit participants who were faculty, staff, or graduate students. These populations were grouped together because they are all considered university experts in some capacity, and most members of these populations have a role in shaping undergraduate education in some way, whether through teaching or program administration. (There are also faculty, staff, and graduate students who play little to no role in undergraduate education; this survey did not attempt to tease out this distinction.) Another practical reason for grouping these populations was sampling: the main campus-wide communications channels for faculty were also accessible to graduate students and staff. Since staff members who were not faculty or graduate students made up a very small percentage of respondents (see below), and since all respondents who identified as staff were still affiliated with an academic discipline, the remainder of this chapter uses the term “academics” to refer to participants in this survey.

The questionnaire for academics was distributed electronically several times during the 2020-2021 academic year through announcements in university digests for all faculty and staff, as well as through a similar digest for graduate students. The questionnaire was also distributed during writing-intensive course orientations, which are hosted regularly by the writing center and include both faculty and graduate student participants. The recruitment announcement was titled “How do you write?” to avoid skewing the sample toward those already interested in multimodal communication. No incentive was provided for responding to the questionnaire. Despite this, the survey yielded a large number of responses from a diverse population: 237 respondents completed at least part of the questionnaire, and 155 of these completed the full questionnaire. Incomplete responses were excluded from this analysis.

  • 63.2% of respondents (n=98) were full-time or emeritus faculty
  • 6.5% of respondents (n=10) were part-time faculty
  • 28.4% of respondents (n=44) were graduate students/postdocs
  • 1.9% of respondents (n=3) identified as full-time staff, and only one of these did not note a research or teaching role attached to this position

Participants represented 43 distinct disciplines. In order to better compare findings to Reid et al.’s (2016) results, I sorted disciplines into two broad categories: humanities/social science (HSS) and STEM. This process categorized 48% of participants (n=74) as affiliated with STEM fields and 52% of participants (n=81) as affiliated with humanities/social science fields.
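
As an illustration only (the chapter does not specify how the sorting was performed, and the discipline names below are hypothetical rather than drawn from the study), a sorting step like this could be expressed in a few lines of Python:

    # Hypothetical lookup table: each self-reported discipline maps to one
    # of the two broad categories used for comparison with Reid et al. (2016).
    DISCIPLINE_CATEGORIES = {
        "English": "HSS",
        "History": "HSS",
        "Sociology": "HSS",
        "Biology": "STEM",
        "Chemistry": "STEM",
        "Mechanical Engineering": "STEM",
    }

    def categorize(discipline):
        """Return 'HSS' or 'STEM' for a respondent's reported discipline."""
        return DISCIPLINE_CATEGORIES[discipline.strip()]

    # Tally respondents by broad category.
    responses = ["Biology", "History", "Chemistry"]
    counts = {"HSS": 0, "STEM": 0}
    for discipline in responses:
        counts[categorize(discipline)] += 1
    print(counts)  # {'HSS': 1, 'STEM': 2}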

Survey of students

In December 2020, a link to the second questionnaire was emailed to all students who made an appointment with the writing center during fall 2020. Participants had the option to be entered into a drawing to win a $25 gift card. A total of 209 students participated in the survey. Participants represented 55 different majors, with 50% of participants in STEM fields, 43% in humanities or social science fields, and 7% undecided.

  • 53.6% of respondents (n=112) were first-year students
  • 13.4% of respondents (n=28) were sophomores
  • 13.9% of respondents (n=29) were juniors
  • 11.0% of respondents (n=23) were seniors
  • 8.1% of respondents (n=17) were graduate students

Analysis

Quantitative questionnaire responses were analyzed primarily for frequency, using descriptive statistics. I used SPSS to explore the data and calculate statistics, and I subsequently visualized salient results, which are presented in the findings section.
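
Although the analysis itself was conducted in SPSS, the kind of frequency tabulation described above can be sketched in Python with pandas; the data frame and column names here are invented for illustration and do not reproduce the study’s variables:

    import pandas as pd

    # Invented example data: one row per respondent; text-type columns hold
    # 1 (respondent reports producing that type) or 0 (does not).
    df = pd.DataFrame({
        "category": ["HSS", "STEM", "HSS", "STEM"],
        "produces_video": [1, 0, 1, 0],
        "produces_poster": [1, 1, 0, 1],
    })

    text_types = ["produces_video", "produces_poster"]

    # Overall frequency: proportion of respondents selecting each text type.
    print(df[text_types].mean())

    # The same descriptive statistics broken out by disciplinary category.
    print(df.groupby("category")[text_types].mean())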

Qualitative data were not gathered in a systematic way; only some participants chose to contextualize their quantitative responses in the open-ended comments field at the end of the questionnaire. As a result, I did not systematically analyze the qualitative data or attempt to draw generalizable conclusions from it. I do occasionally draw on qualitative responses as examples in my reflection on study findings in the discussion section.

