Sampling Plan Updates
For some courses, we were unable to obtain the stratified sample size we wanted, for a variety of reasons. We will be unable to generalize results for those courses, but our overall sample size remains valid for generalizing to the student body.
We have identified problems with the evidence-gathering process, and will improve the process in 2015-2016 by:
- Selecting courses for the AIM process in the spring
- Providing a timeline for fall evidence collection in the spring
- Clearly defining the evidence-collection process in the spring
- Selecting the sample of student IDs immediately after the 14th day
- Oversampling to account for those who withdraw
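The last two steps above, drawing IDs shortly after the 14th day and padding the draw against withdrawals, can be sketched as follows. This is a minimal illustration, not our actual procedure: the roster, the 10% withdrawal rate, and the target of 68 students are all placeholder assumptions.

```python
import math
import random

def draw_sample(student_ids, target_n, withdrawal_rate=0.10, seed=42):
    """Draw a simple random sample of student IDs, oversampled so that
    the expected number remaining after withdrawals still meets target_n.
    The withdrawal_rate is an assumed estimate, not institutional data."""
    oversampled_n = math.ceil(target_n / (1 - withdrawal_rate))
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return rng.sample(student_ids, min(oversampled_n, len(student_ids)))

# Hypothetical roster of 200 enrolled IDs captured after the 14th day
roster = [f"S{i:04d}" for i in range(200)]
sample = draw_sample(roster, target_n=68)
print(len(sample))  # 76 IDs drawn, to net 68 after ~10% withdraw
```

Oversampling up front avoids having to re-draw mid-semester, which would break the randomness of the original selection.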
Sampling Plan for the AIM Process
It is important to select a sample that is truly representative of our student population. Therefore, we select a stratified random sample from each course selected for the AIM process.
By selecting a random sample in this way, we ensure all students, regardless of degree type, have an equal chance of being selected. We also ensure our random sample is proportionately similar to the actual student population.
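A proportional stratified draw like the one described above can be sketched as follows. The degree-type strata and head counts here are illustrative assumptions, not our enrollment figures.

```python
import random

def stratified_sample(population_by_stratum, total_n, seed=7):
    """Sample from each stratum in proportion to its share of the
    population, so the sample mirrors the student body's composition."""
    rng = random.Random(seed)
    pop_total = sum(len(ids) for ids in population_by_stratum.values())
    sample = {}
    for stratum, ids in population_by_stratum.items():
        n = round(total_n * len(ids) / pop_total)  # proportional allocation
        sample[stratum] = rng.sample(ids, min(n, len(ids)))
    return sample

# Hypothetical strata by degree type
population = {
    "AA": [f"AA{i}" for i in range(300)],
    "AS": [f"AS{i}" for i in range(150)],
    "AAS": [f"AAS{i}" for i in range(50)],
}
s = stratified_sample(population, total_n=50)
print({k: len(v) for k, v in s.items()})  # {'AA': 30, 'AS': 15, 'AAS': 5}
```

Because each stratum's allocation matches its population share, every student has the same overall chance of selection regardless of degree type.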
For those of you who are statistically inclined: we set our confidence level at 90% and our margin of error at 10%. This allows us to say we are 90% confident that our sample results fall within +/- 10% of the true population scores. We chose these parameters to keep our sample sizes manageable.
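Under these parameters, the required sample size can be estimated with the standard Cochran formula. This is a sketch of the arithmetic, not our official worksheet; the 100-student course in the example is a hypothetical.

```python
import math

def sample_size(margin_of_error, z=1.645, p=0.5, population=None):
    """Cochran's formula for required sample size.
    z = 1.645 corresponds to a 90% confidence level, and p = 0.5 is the
    most conservative assumed proportion. The optional finite-population
    correction shrinks n for small courses."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n0)

print(sample_size(0.10))                  # 68 for a large population
print(sample_size(0.10, population=100))  # 41 for a 100-student course
```

This is why the sample sizes stay manageable: even a large course needs only about 68 students at these settings, and smaller courses need fewer still.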
*In 2013-2014, faculty identified shortcomings in our sampling method; there were concerns that it was not representative of our entire population. In response, we developed this new method, which was implemented in 2014-2015.
There is one sample/evidence liaison assigned for each general education division. These liaisons work with Institutional Effectiveness and lead faculty to gather evidence based on the random selection of student IDs. The 2014-2015 liaisons are listed below:
| Division | Liaison |
| --- | --- |
| Math and Science | Shay Bean |
| Humanities and Fine Arts | Allison Fetters |
| Social and Behavioral Science | Dan Rose |