--- title: "Grading Exams" author: "Alejandro Gonzalez Recuenco" date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Grading Exams} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- This vignette describes how to grade an exam using the script found on the `exec/` folder. You can find where this script is found by typing on an R terminal. ```{r echo=TRUE} system.file("exec", package = "TexExamRandomizer") ``` ## Format of the student's answers When you collect the responses from the student, you should follow the following prescriptions: (@) The actual answers to each question should be an integer number, (1 being the first choice). They should be written in the order that they are presented in the student's exam. The columns should be called "Question 1", "Question 2", etc. (Or "Q 1", "Q2", etc). * If a certain question is left unanswered, give it the value 0 in the table. (@) If you want to give extra points, add a column called "Extra Points" with the added points. (@) You require a column called "Version", where the version number for each exam is written. (@) All other columns will be placed on the output "_Graded.csv" table. ```{r echo=FALSE} library(knitr) responses.file <- system.file("extdata", "ExampleTables", "ExampleResponses.csv", package = "TexExamRandomizer") responsesTable <- read.csv(responses.file, header = TRUE) responsesTable$ExtraPoints <- sample.int(10, size = nrow(responsesTable), replace = TRUE) kable(responsesTable[1:5,], caption = "Example responses format") ``` ## Format of the answer sheet When you created the exams by using the `examrandomizer` script, it will generate a `fullanswersheet.csv` file. When grading, it assumes that the "correctTag" and the "wrongTag" are respectively "choice" and "CorrectChoice". If that is not the case in your structure, simply change the names to "choice" and "CorrectChoice" of those columns. ```{r echo=FALSE} library(knitr) answer.file <- system.file("extdata", "ExampleTables", "ExampleAnswerSheet.csv", package = "TexExamRandomizer") answersTable <- read.csv(answer.file, header = TRUE) kable(answersTable[1:5,], caption = "Answer sheet, the last two rows must be named \"choice\" and \"CorrectChoice\"") ``` (But if you really want to change that assumption, you can access the script and edit `ASHEET_COLCORRECT` and `ASHEET_COLINCORRECT`) ### What if I wrote some questions wrong and I noticed too late? I already printed the exams If you realize one question is incorrectly written, but it is too late to rewrite the exam, find the original answer sheet in the `fullanswersheet.csv` (The rows with version number 0). Then, remove the lines from the answer sheet that refer to the question you want to ignore. (Keep a backup of the answer sheet, just in case). When the students answer the exam. Tell them to write *any* answer in that question, it will be ignored by the program. # Grading the exam You can specify directly the responses from the students (`--resp`) and the answer sheet (`--answer`). gradeexamrandomizer --resp --answer If you have both of those files on the same directory and you make sure their name is somehow similar to "`responses*.csv`" and "`answer*.csv`", you can write the shorthand version. gradeexamrandomizer --dir The output will be two csv files in the same directory where the students' responses file is found, called `*_Graded.csv` and `*_Stats.csv`. 
```{r echo=FALSE}
library(knitr)
graded.file <- system.file("extdata", "ExampleTables", "Example_Graded.csv", package = "TexExamRandomizer")
gradedTable <- read.csv(graded.file, header = TRUE)
kable(gradedTable[1:5,], caption = "Output graded file")
```

```{r echo=FALSE}
library(knitr)
stats.file <- system.file("extdata", "ExampleTables", "Example_Stats.csv", package = "TexExamRandomizer")
statsTable <- read.csv(stats.file, header = TRUE)
kable(statsTable[1:5,], caption = "Output stats (it simply counts how many times each choice was chosen across all students)")
```

## How is the grade calculated

As you can see in the example above, five columns are added to the output `*_Graded.csv` table:

* `addedPoints`: Points added to each individual student, taken from a column called "ExtraPoints" in the responses table.
* `addedAllPoints`: Points added to all students. These points are added as if the exam had `addedAllPoints` extra questions and every student answered them correctly. (With the current script this is always zero.)
* `maxGrade`: Maximum number of answers in that exam. If you removed a question that you wanted to ignore, but that question is not found on all versions, some exams will have a greater `maxGrade` than others.
* `Grade`: The number of correct answers on the student's exam. (If a question has more than one correct answer, the script won't be able to detect all of them, only whether the student wrote one of the correct ones.)
* `Grade_Total_Exam`: The total grade, after scaling. It assumes the maximum grade of the exam is 100%; for generality we will denote it `MAX`. (To change it, use the option `--max`.)

$$Grade_{Total Exam} = \frac{Grade + addedAllPoints}{maxGrade + addedAllPoints} \cdot MAX + addedPoints$$
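As a quick numeric illustration of this formula, with made-up values for a single student:

```{r eval=FALSE}
# Made-up example values for one student
Grade          <- 17    # correct answers on this student's exam
addedAllPoints <- 0     # currently always zero with this script
maxGrade       <- 20    # maximum number of answers on that exam version
addedPoints    <- 3     # this student's extra points
MAX            <- 100   # maximum grade of the exam (change with --max)

Grade_Total_Exam <- (Grade + addedAllPoints) / (maxGrade + addedAllPoints) * MAX + addedPoints
Grade_Total_Exam  # 17/20 * 100 + 3 = 88
```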