
Frequently Asked Questions about General Education Assessment

Why does the faculty need to do this?
Assessment of student learning in general education (GE) courses is required so that we can determine whether our students have mastered the content and skills of a liberal education as defined by our faculty. Although some GE assessments are mandated by the SUNY Board of Trustees, the primary reason we do it is that it provides criterion-referenced data that inform our curriculum and instructional decisions. The GE Board encourages you to structure assessments in your GE courses so that you obtain information about student learning that will be useful to you in improving teaching, learning, and the curriculum.

What exactly does our campus need to do?
Our campus has developed a Campus-Based Assessment (CBA) plan and a Strengthened Campus-Based Assessment (SCBA) plan, both of which were approved by the General Education Assessment Review (GEAR) group, a SUNY-wide body (see below).
Annually, we submit a GE Summary Report to GEAR in which we discuss improvements we have made as a result of the previous round of GE assessment, major findings of the current round, and actions to be taken in response to those findings. While we are no longer (as of spring 2007) required to report data to GEAR on the percentage of students “exceeding,” “meeting,” “approaching,” or “not meeting” each of the Board of Trustees GE objectives, GEAR requires that we keep these data on our campus. (The data we keep are a summary of the overall percentages for all students assessed.)

Can you provide more information about the General Education Assessment Review (GEAR) group?
The General Education Assessment Review (GEAR) group was established in spring 2001 upon the recommendation of the Provost’s Advisory Task Force on the Assessment of Student Learning Outcomes, and was formed jointly by leadership from the University Faculty Senate, the Faculty Council of Community Colleges, and System Administration. The GEAR group’s primary goal is to work with the 57 SUNY campuses that have general education programs as they develop and implement their campus-based plans for assessing student learning outcomes in general education, following the guidelines contained in the Task Force report. GEAR is charged with providing initial and ongoing review of campus-based general education assessment plans. Its review focuses on the assessment processes and procedures that establish a culture of program improvement, not on the evaluation of a campus’ program or faculty. GEAR also has responsibility for conducting activities that facilitate the development and refinement of campus-based general education assessment plans.

What is Campus-Based Assessment (CBA)?
The campus-based assessment process refers to the assessment plans we have in place for assessing the 12 student learning outcome areas (i.e., the 10 Knowledge and Skills areas plus the 2 Competencies) that make up the SUNY General Education Requirement (GER). The Knowledge and Skills areas are: Mathematics, Natural Sciences, Social Sciences, American History, Western Civilization, Other World Civilizations, Humanities, The Arts, Foreign Languages, and Basic Communication [Written]. The Competencies are: Critical Thinking [Reasoning] and Information Management.

What is Strengthened Campus-Based Assessment (SCBA)?
Strengthened Campus-Based Assessment refers to the modifications required in our New Paltz campus-based assessment plan as a result of the SUNY Board of Trustees’ 2004 Resolution.

How does our CBA plan differ from our SCBA plan?
These plans differ in only two ways. First, under SCBA, our campus must use externally-referenced measures to assess three learning outcome areas: Mathematics, Critical Thinking [Reasoning], and Basic Communication [Written]. Second, our campus must assess students’ perceptions of the campus’ academic environment—more specifically, students’ engagement in academic activities on our campus.

Our campus will use the rubrics and standards developed by the SUNY discipline-based panels to assess the three student learning outcome areas. We will administer the National Survey of Student Engagement (NSSE) to assess students’ engagement in academic activities on our campus. The NSSE is nationally normed and has been used on hundreds of campuses.

Will we continue to administer the Student Opinion Survey?
The Student Opinion Survey (SOS) was administered in spring 2006 and is scheduled for spring 2009. Between now and then, System Administration will review the SOS for the purpose of eliminating overlap between it and the National Survey of Student Engagement. System Administration will continue to administer the SOS in the future – every third year, as is now the case – since it provides useful information regarding students’ perceptions of campuses’ facilities and services.

Can we fold “Campus-Based Assessment” and “Strengthened Campus-Based Assessment” into one unified GE assessment plan?
Yes, we can combine the elements of our CBA plan into our SCBA plan. We must, however, use the SCBA objectives and rubrics when we assess Mathematics, Basic Communication [Written], and Critical Thinking.

How often is GE assessment performed?
Each area is assessed on a three-year cycle; Critical Thinking and Information Management are assessed in concert with the content areas. The assessment schedule for our campus is as follows:

Spring 2015: American History category, Other World Civilizations category, Basic Communication--Oral category, and courses in these content categories that were approved with the Critical Thinking competency. In addition, Ethical Reflection, Effective Expression--Oral, and Information Management will be assessed across all content categories.

Spring 2016: Art category, Foreign Languages category, Basic Communication--Written category, Math category, Diversity category, and courses in these content categories that were approved with the Critical Thinking competency. In addition, Effective Expression--Aesthetic and Effective Expression--Written will be assessed across all content categories.

Spring 2017: Humanities category, Natural Sciences category, Social Sciences category, Western Civilization category, and courses in these categories that were approved with the Critical Thinking competency.

An Important Reminder: Because Mathematics, Basic Communication [Written], and Critical Thinking are under SCBA, they must be assessed using the rubrics and standards developed by the SUNY discipline-based faculty panels. These rubrics and standards are available at: http://www.newpaltz.edu/GE/criticalthinking.pdf; http://www.newpaltz.edu/GE/mathematicsrubric.pdf; and http://www.newpaltz.edu/GE/WritingRubrics.Final.pdf.

(This schedule repeats in a three-year cycle.)

How are students selected to have their work assessed?
We have some latitude on this, but three requirements must be met: all students taking courses in the areas being assessed must have the same probability of being assessed; every course and section must be ready for assessment, meaning that every section in an area has an assessment plan in place by the start of the semester; and at least 20% of all students enrolled in courses in the area must be assessed.

In most areas, the following is done. The Office of Institutional Research and Planning (OIRP) uses a stratified random sampling methodology based on these principles: every section has an equal chance of being chosen (except in areas where there is a census, i.e., 100% of students are assessed, rather than a sample), and the sample is a true random sample that represents the whole, so results are generalizable to the entire population. If exigencies or special situations in an area require modifying the stratified random sample, OIRP will consider an adjustment.
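To make the sampling idea concrete, here is a minimal sketch in Python of a stratified 20% sample with sections as strata. It is illustrative only, not OIRP’s actual procedure, and the section names and rosters are hypothetical.

    import math
    import random

    # Hypothetical rosters: each section (stratum) maps to its enrolled students.
    sections = {
        "MAT101-01": ["s01", "s02", "s03", "s04", "s05"],
        "MAT101-02": ["s06", "s07", "s08", "s09"],
        "MAT102-01": ["s10", "s11", "s12", "s13", "s14", "s15"],
    }

    SAMPLE_FRACTION = 0.20  # at least 20% of students enrolled in the area

    def stratified_sample(sections, fraction):
        """Sample the same fraction from every section, so every student
        in the area has roughly the same probability of being chosen."""
        chosen = {}
        for name, roster in sections.items():
            k = math.ceil(len(roster) * fraction)  # round up to meet the minimum
            chosen[name] = random.sample(roster, k)
        return chosen

    picked = stratified_sample(sections, SAMPLE_FRACTION)
    total = sum(len(students) for students in picked.values())
    enrolled = sum(len(roster) for roster in sections.values())
    print(picked)
    print(f"Sampled {total} of {enrolled} students ({total / enrolled:.0%})")

Because each section contributes the same fraction of its roster, the sample mirrors the distribution of students across sections, which is what makes the results generalizable to the whole area.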

How do students have their work assessed?
Assessment is usually based on the assignments that you develop for your course.

How are these assignments evaluated?
They are assessed by the instructor, who refers to rubrics that clarify what levels of student performance constitute “exceeding,” “meeting,” “approaching,” or “not meeting” each objective. Articulating clear standards is important, since best practice requires a mechanism to ensure inter-rater reliability. One of the criteria GEAR uses to evaluate our assessment plans is a provision for inter-rater reliability, so those provisions must be included in every course plan.

What is inter-rater reliability, and why is it important?
Consistent assessment yields the best data. For this reason, we (i.e., the GE Board) want to know what steps you are taking to minimize variability in how different instructors evaluate the assignments. Inter-rater reliability is the degree to which different faculty raters assign the same score to the same work. One acceptable way to ensure it is to use written rubrics to which all raters refer and agree; another is to include a third rater when there is concern about the accuracy of an assessment.
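As an illustration of how agreement between raters can be quantified, the sketch below computes Cohen’s kappa, a standard chance-corrected agreement statistic, for two hypothetical raters scoring the same ten assignments on the four performance levels. This is offered purely as background on the concept, not as a procedure our plans require.

    from collections import Counter

    LEVELS = ["exceeds", "meets", "approaches", "does not meet"]

    def cohen_kappa(rater_a, rater_b):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum(counts_a[lvl] * counts_b[lvl] for lvl in LEVELS) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical scores from two raters for the same ten assignments.
    rater_a = ["meets", "exceeds", "meets", "approaches", "meets",
               "does not meet", "meets", "exceeds", "approaches", "meets"]
    rater_b = ["meets", "exceeds", "approaches", "approaches", "meets",
               "does not meet", "meets", "meets", "approaches", "meets"]

    print(f"Cohen's kappa: {cohen_kappa(rater_a, rater_b):.2f}")  # ~0.70 here

A kappa near 1 indicates that raters agree far more often than chance would predict; a low or negative kappa signals that the rubric needs clarification or that the raters need a norming session.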

What is a norming session?
A norming session aligns the standards that multiple raters will use. The idea is that most assignments will be evaluated by only one rater, but the raters will first have conferred on the evaluation of some selected assignments; this initial conference, in which the raters compare their evaluations, is the norming session. GE Board members are available to facilitate norming sessions.
The purpose of a norming session is to support inter-rater reliability and validity. Groups of faculty engage in a norming session to ensure that, when they apply a rubric to an assignment, they use its criteria to interpret a student’s performance in the same way. Assignments may be evaluated by only one rater, but if a second rater is needed, that rater’s assessment should closely agree with the first rater’s as a result of norming. If there is a marked discrepancy between two raters, a third rater may be asked to assess the assignment and, if necessary, suggest a reevaluation of the rubric.

Will data from the GE assessments be used to evaluate instructors?
No, this is not the purpose of the GE assessments. Instructors are strongly encouraged to use the assessments to gain insight into how well their students are learning, but the data are not used to evaluate instructors. To ensure that they are not used in this way, all GE assessment data are aggregated; it is not possible to recover information about individual instructors.

The Associate Provost will take the raw data from instructors (via my.newpaltz.edu) and release only aggregated data. (Departments will be given the aggregated data for their own courses.) Individual instructors must keep a copy of their GE course assessment data and use it to improve the course the next time they teach it.
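As a minimal sketch of what aggregation means in practice (the data and section names are hypothetical, and this is not the Associate Provost’s actual pipeline), the snippet below pools per-section results into overall percentages, so the released numbers cannot be traced to any individual instructor:

    from collections import Counter

    LEVELS = ["exceeds", "meets", "approaches", "does not meet"]

    # Hypothetical per-section results; instructor identity is never attached
    # to the released figures.
    raw = {
        "HUM201-01": ["meets", "exceeds", "approaches", "meets"],
        "HUM201-02": ["meets", "does not meet", "meets"],
        "HUM205-01": ["exceeds", "meets", "approaches", "meets", "meets"],
    }

    counts = Counter(score for scores in raw.values() for score in scores)
    total = sum(counts.values())
    for level in LEVELS:
        print(f"{level:>13}: {counts[level] / total:.0%}")

Only the four overall percentages are released, which matches the summary data the campus is required to keep on file.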

What procedures do we have in place to assure the responsible and confidential use of assessment data?
See the previous response. The Office of the Provost does not report data at the individual faculty level; instead, we aggregate data for reporting purposes. We keep data in a secure, password-protected online environment. We take seriously the responsibility to ensure that assessment data are not readily available in a form that could be linked directly to individual faculty. While every attempt is made to keep the data confidential, we recognize that our campus might be required to provide access to documents in certain situations (e.g., in response to a Freedom of Information Law request). Because such a request is possible, it would be misleading to imply that total confidentiality can be guaranteed.

How does one pick which assignments to assess?
This depends on the objectives to be assessed. With planning, different parts or facets of a single assignment can serve to assess all of the objectives. Depending on your field, it may be best to assess an essay, a presentation, or particular questions on an exam. The assignment may occur at any time during the semester.

What information is needed in a departmental GE assessment plan?
The plan consists of a cover sheet, on which the department chair gives some departmental information and lists all the course/area combinations that might need to be assessed; see http://www.newpaltz.edu/GE/assessdeptform.pdf for the departmental assessment form. Course assessment plans, including descriptions of assignments and their respective rubrics, should be attached for all of the course/area combinations listed. Use http://www.newpaltz.edu/GE/assessinstructor.pdf if the course assessment form is being submitted directly by the instructor teaching the course, or http://www.newpaltz.edu/GE/assesscourse.pdf if it is being submitted by the department chair.

What information is needed in a GE course assessment plan?
The GE course assessment plan should include the GE course title and number, the GE category/competency to be assessed (e.g., Humanities, Natural Sciences, Critical Thinking), the course objective(s), descriptions of the assignments, and a rubric.

To ensure that all students have their work assessed, the assignment(s) should be required. Please be careful to design assignments that will provide an effective measure of how well students meet the objectives being assessed. The GE Board, as well as GEAR, will check for this quality, called face validity.

Attach a rubric to show that the objectives will be assessed in a uniform way. A set of sample student responses should also be given: for each objective, there should be a sample response that "exceeds" the objective, one that "meets" it, one that "approaches" it, and one that "does not meet" it. Even with a rubric, some items remain open to interpretation, so state how your department will ensure that the rubric is used uniformly. (You should demonstrate that you are making a reasonable effort to ensure inter-rater reliability.)

What is a rubric?
In the context of GE assessment, a rubric is a document that spells out the criteria that a student’s work should meet to "exceed," "meet," "approach," or "not meet" each objective (or part of an objective). Rubrics are often presented as tables, with a row for each objective and columns for “exceeds,” “meets,” “approaches,” and “does not meet.”
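For illustration only (this is a hypothetical example, not an official New Paltz rubric), a single row of a writing rubric for the objective "develop a coherent thesis" might read: exceeds--the thesis is insightful and supported consistently throughout; meets--the thesis is clear and generally supported; approaches--a thesis is present but support is uneven; does not meet--no discernible thesis is offered.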

Which objectives should be assessed in an area?
One content or competency area may have several sets of objectives: the CBA and SCBA objectives provided by the SUNY Board of Trustees (BoT), a set designed by our New Paltz faculty, and your own course objectives. Because we are required to report GE assessment results for the BoT objectives, assessment data for the BoT objectives must be reported separately.

What are the Board of Trustees and New Paltz GE objectives?
The BoT and New Paltz objectives can be found in the table at the end of this document.

Note: New Paltz includes objectives for the Diversity knowledge and skills area and for the competencies Effective Expression--Aesthetic, Effective Expression--Oral, Effective Expression--Written, and Ethical Reflection, whereas the Board of Trustees does not.

What GE assessment information is reported?
Our campus is required to report GE assessment results based on the Board of Trustees objectives.

How does GE assessment relate to program assessment?
Each academic department on campus must do program assessment, with programs usually defined as majors and concentrations. It is up to each department to determine which aspects of its programs it wants to assess each year and how. Departments that contribute to GE should consider their GE courses part of their program offerings. As with GE assessment, departments are expected to use the results of program assessment to improve programs and student learning.

How is a department supposed to “close the loop” and use assessment results?
The purpose of assessment is improvement of student learning, so faculty need to discuss their results with other faculty to reflect on how they might more effectively meet this goal. Faculty have always made course and program changes to improve student learning; assessment just makes the process more transparent and systematic.

How can results be useful and meaningful if reported only in the aggregate?
Data reported in the aggregate often do not provide the basis for meaningful discussions about the curriculum. Consequently, we ask that you keep a record of your own GE course assessments and use them to inform your conversations with others (e.g., department chairs and others who teach in the same area) about how you might improve your GE course and student learning.

» BOT and SUNY New Paltz GE Learning Outcomes