The content of a specified grade column is used to create a new column of the corresponding ranks. Rankings are based upon the highest grade(s) receiving the rank of 1, the next highest grade(s) a rank of 2, etc.
The values in one column are used to calculate letter grades, which are then placed into another column of choice. If a perfect score is specified for the input column, the letter grade is based upon a percentage calculation against that perfect value; otherwise the perfect score is assumed to be 100. Mapping is done without rounding the grade, i.e., 79.9 is mapped to a B. If you wish to see letter grade distributions based upon rounded grades, create a new column of rounded grades using the Round tool. Note: fractional grades cannot be submitted to the Registrar; if they are present, they are rounded when producing Registrar grade submission print reports or email files. Letter grades are allocated as follows:
F < 50%
D >= 50% and < 60%
C >= 60% and < 70%
B >= 70% and < 80%
A >= 80% and < 90%
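The mapping above can be sketched as follows. Note that the table lists no band above 90%; the sketch assumes scores of 90% and higher also receive an A, which is an assumption, not something stated in the text.

```python
def letter_grade(score, perfect=100):
    """Map a raw score to a letter grade, without rounding.

    If no perfect score is given, 100 is assumed, as described above.
    """
    pct = score / perfect * 100
    if pct < 50:
        return "F"
    elif pct < 60:
        return "D"
    elif pct < 70:
        return "C"
    elif pct < 80:
        return "B"
    else:
        # The documented bands stop at 90%; mapping >= 90% to "A"
        # is an assumption made for this sketch.
        return "A"

print(letter_grade(79.9))             # unrounded, so "B"
print(letter_grade(12, perfect=15))   # 80% of perfect -> "A"
```

Because no rounding occurs, 79.9 stays a B; round first with the Round tool if you want 79.9 counted as an A-range 80.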
The values in one column are used to calculate a “PASS” or “FAIL” grade, which is placed into another column of choice. If a perfect score is available from the input column, the pass grade is based upon a percentage calculation against that perfect value; otherwise the perfect score is assumed to be 100. Assignment of PASS/FAIL is done without rounding the grade; for example, 49.9 is mapped to a FAIL. If you wish to see pass/fail grade distributions based upon rounded grades, create a new column of rounded grades using the Round tool and then use that column to generate the pass/fail column. Pass/Fail grades may be submitted to the Registrar provided approval for this format has been pre-arranged with the Registrar.
There is also some flexibility in how these coded assignments take place and how they are recorded. You may choose either the code PASS or SAT to appear for grades that are equal to or greater than a specific value selected from a list of values (default is 50). For example, a PASS might be considered 60 or 65 in some programs, and this can be selected by clicking the appropriate value. If you wish to code honours-level grades as HON, this can also be done by selecting from the list box the value that defines honours (default is not to distinguish HON from PASS or SAT). Use of these codes for electronic submission is restricted to certain faculties, so it is advised that you consult the Registrar for approval.
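A minimal sketch of these coding rules follows. The cutoff values are parameters matching the defaults described above; the choice between the PASS and SAT labels is a display option, so PASS is used here, and all cutoffs are treated as "equal to or greater than", per the text.

```python
def pass_fail(score, perfect=100, pass_cutoff=50, hon_cutoff=None):
    """Code a score as PASS, FAIL, or (optionally) HON.

    Defaults follow the text: pass cutoff 50, no HON distinction.
    """
    pct = score / perfect * 100   # no rounding, per the text
    if pct < pass_cutoff:
        return "FAIL"
    if hon_cutoff is not None and pct >= hon_cutoff:
        return "HON"
    return "PASS"

print(pass_fail(49.9))                  # unrounded, so "FAIL"
print(pass_fail(85, hon_cutoff=80))     # honours band enabled -> "HON"
```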
The content of a specified grade column is used to create a new column of corresponding percentile ranks. Percentile ranks range from 0 to 99. A percentile rank is usually used in reporting results of norm-referenced tests. For example, if a student has a percentile rank of 80 on a test they are said to have a test score higher than the scores for 80% of the norm group.
Percentile Rank = (cfl + 0.5 * fi) / N * 100%
cfl is the cumulative frequency of all scores lower than the score of interest
fi is the frequency of scores in the interval of interest
N is the number of non-missing values in the column
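The formula can be sketched as follows. Missing values are represented here as None, and producing the 0–99 range by truncating to an integer is an assumption for this sketch.

```python
def percentile_rank(column, score):
    """Percentile Rank = (cfl + 0.5 * fi) / N * 100, capped to 0-99."""
    values = [v for v in column if v is not None]   # drop missing values
    n = len(values)
    cfl = sum(1 for v in values if v < score)       # scores below the score of interest
    fi = sum(1 for v in values if v == score)       # scores in the interval of interest
    pr = (cfl + 0.5 * fi) / n * 100
    return min(int(pr), 99)                         # truncation to 0-99 assumed

scores = [50, 60, 70, 80, 90]
print(percentile_rank(scores, 90))   # (4 + 0.5)/5 * 100 -> 90
```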
A grade column can be defined as the simple sum of a student's highest grades taken from a set of assignments. For example, columns A, B, C, D may be used to record the marks for four assignments given throughout the year. Column E can be made to receive the highest sum of 3 scores taken from columns A, B, C, D. Warning: The Best function assumes that all of the values in the input columns have the same perfect score and selects values based purely on which are the largest. If there are not enough grades for a particular student (i.e., NGR, empty, or EXCU cells), the result is set to NGR; however, if an EXCU code appears in at least one of the columns in the Best Sum set, EXCU is recorded as the result instead of NGR.
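A sketch of the Best Sum rules described above. Representing NGR and EXCU as strings and empty cells as None is an assumption made for illustration.

```python
def best_sum(scores, n):
    """Sum of a student's n highest numeric scores.

    Per the text: if fewer than n numeric grades are present, the
    result is NGR -- or EXCU if an EXCU code appears among the inputs.
    """
    numeric = [s for s in scores if isinstance(s, (int, float))]
    if len(numeric) < n:
        return "EXCU" if "EXCU" in scores else "NGR"
    return sum(sorted(numeric, reverse=True)[:n])

print(best_sum([80, 70, 90, 60], 3))    # best 3 of 4 -> 240
print(best_sum([80, "EXCU", None, 60], 3))   # too few grades, EXCU present
```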
The contents of selected columns are used to calculate averages whose divisors may vary, where an average is equal to the sum of a student’s non-missing values, adjusted for perfects, from specific columns divided by the number of these non-missing values. More specifically, when all the grades are present for a student, the function EAver(A,B,C) is mathematically equivalent to ((A/Aperfect + B/Bperfect + C/Cperfect)/3) * EAverPerfect, where EAverPerfect is the perfect value designated for the EAver receiving column. In words, each of the scores in columns A through C is taken as a fraction of its specific perfect value (100 assumed if not stated); these fractions are summed and divided by three, and the resulting fraction is converted to a value based upon the perfect value designated for the receiving column.
If a grade is missing, e.g., NGR, then the entire calculation results in NGR. An exception applies when the code EXCU (Excused) is recorded for a grade: that value is removed from the average calculation, i.e., the sum and the divisor are adjusted to use the remaining non-missing scores. For example, an Excused Average calculation using columns (A,B,C) where some students have EXCU recorded for some of their grades (medical reasons, family death, etc.) will have their averages calculated based on three, two, or perhaps just one column.
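The Excused Average calculation might be sketched like this. The per-column perfect values default to 100 in the text but are passed explicitly here, and the behaviour when every grade is EXCU is an assumption.

```python
def excused_average(scores, perfects, result_perfect=100):
    """Excused average in the style of EAver(A,B,C).

    Each score is taken as a fraction of its column's perfect value;
    EXCU entries shrink both the sum and the divisor, and any other
    missing grade makes the whole result NGR.
    """
    fractions = []
    for score, perfect in zip(scores, perfects):
        if score == "EXCU":
            continue              # excused: removed from the calculation
        if not isinstance(score, (int, float)):
            return "NGR"          # any other missing grade -> NGR
        fractions.append(score / perfect)
    if not fractions:
        return "EXCU"             # every grade excused; behaviour assumed
    return sum(fractions) / len(fractions) * result_perfect

# Assignment out of 100 and out of 15, with the middle grade excused.
print(excused_average([80, "EXCU", 12], [100, 100, 15]))
```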
A course syllabus states that a student's final grade will consist of four lab exercises (20%), six tests (35%), and a final exam (45%). The labs will be averaged together to determine the lab component and the tests averaged together to determine the test component. An ideal situation might have this final grade calculation expressed as 20%*(L1+L2+L3+L4)/4 + 35%*(T1+T2+…+T6)/6 + 45%*Exam. However, it is inevitable that some students will have legitimate excuses (medical, family death, etc.) for missing a lab or test. The instructor would like to excuse one or more labs and tests for those students and use their remaining scores to calculate the lab and test components of the calculation. This can be accomplished in MMS by using the Excused Averaging function in conjunction with the special grade code EXCU for the labs and tests of students who meet the criteria to be excused.
An equation may be defined for a column. Expressions may include variable references to columns A–AZ, constants, the operators + (addition), - (subtraction), * (multiplication), / (division), and ^ (exponentiation), and parentheses ( ).
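As an illustration of what such a column equation computes, the sketch below evaluates a syllabus-style weighting for one hypothetical student in Python. The column names and grade values are invented for illustration, and Python's ** stands in for the manual's ^ operator.

```python
# Hypothetical grades for one student in columns A (lab average),
# B (test average), and C (final exam), all out of 100.
grades = {"A": 80, "B": 70, "C": 90}

# Column equation 0.2*A + 0.35*B + 0.45*C, i.e. a 20/35/45 weighting.
result = 0.2 * grades["A"] + 0.35 * grades["B"] + 0.45 * grades["C"]
print(result)
```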
The content of a specified column is used to create a new column of grades which have been standardized to a mean of x (default 65) and a standard deviation of y (default 10). The new grade column is calculated by taking the individual Z scores from the original column and mapping them onto the new mean and standard deviation.
New score = sd * Zi + mean
Zi is the Z value for a student calculated from the original column mean and standard deviation
sd is the selected standard deviation
mean is the selected mean
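The transformation above can be sketched as follows; use of the sample (rather than population) standard deviation for the original column is an assumption.

```python
from statistics import mean, stdev

def standardize(column, new_mean=65, new_sd=10):
    """New score = new_sd * Zi + new_mean for each grade in the column."""
    m, s = mean(column), stdev(column)     # original mean and sample sd
    return [new_sd * (x - m) / s + new_mean for x in column]

# Original mean 60, sd 10 -> Z scores -1, 0, 1 -> rescaled to 65 +/- 10.
print(standardize([50, 60, 70]))
```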
The content of a specified column is used to create a new column of grades which have been normalized onto a mean x (default 65) and standard deviation y (default 10). The new grade column is calculated by creating percentile ranks from the input column and mapping those percentiles to the normal distribution to obtain Z values. These Z values are then used to create the new scores based upon the specified mean and standard deviation.
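A sketch of this two-step process, using the percentile formula from the Percentile Rank section (as a proportion rather than a percentage) and Python's standard normal distribution for the inverse-CDF lookup:

```python
from statistics import NormalDist

def normalize(column, new_mean=65, new_sd=10):
    """Map each grade's percentile rank through the inverse normal
    CDF to get a Z value, then rescale to the chosen mean and sd."""
    n = len(column)
    out = []
    for x in column:
        cfl = sum(1 for v in column if v < x)
        fi = sum(1 for v in column if v == x)
        p = (cfl + 0.5 * fi) / n          # percentile as a proportion
        z = NormalDist().inv_cdf(p)       # percentile -> Z value
        out.append(new_sd * z + new_mean)
    return out

print(normalize([50, 60, 70]))   # median grade lands on the new mean, 65
```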
The Pearson’s Correlation Coefficient, or product moment correlation, is independent of the measurement scales of the two columns being examined. In other words, one column of grades may be test scores marked out of 100 while the other grade column is an assignment marked out of 15. Pearson’s r provides a measure of the strength and direction of a linear relationship between two sets of scores. Pearson’s r coefficient ranges from –1.0 to +1.0. Pearson’s r helps to answer the question “do students who do well on one column of grades typically do well on another column of grades?”. If the Pearson’s r is positive and relatively large (greater than 0.5), then this suggests that a positive linear relationship exists such that high scores on one test are good indicators of high scores on the other test. If the r produced is negative and relatively large in magnitude (less than –0.5), then this suggests that a negative linear relationship exists such that high scores on one test are good indicators of low scores on the other test. It is unlikely that a negative Pearson’s r would be observed in a normal teaching and testing situation.
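A minimal sketch of the calculation; the column values below are invented for illustration, and note that the two columns need not share a perfect score.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation for two paired grade columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))   # co-deviation
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A test out of 100 paired with an assignment out of 15:
# different scales, but r still measures the linear relationship.
print(pearson_r([55, 70, 85, 90], [8, 10, 13, 14]))
```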
The t-Test is usually applied to random samples from independent groups. However, in the case of marks management we are interested in testing the difference between means obtained from matched pairs of the same individuals. In these circumstances we are not likely to see very large differences because the means of the two groups are inherently correlated. The correlated t-Test provides a basis for testing correlated groups because it has a factor in its formula that corrects for the built-in correlation.
t = Sum(D)/SQRT((N*Sum(D^2)-Sum(D)^2) / (N-1))
D is the column of grade differences, e.g. column B – column A
N is the number of non-missing paired grades from column B and A
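The formula and its inputs can be sketched as follows. Missing grades are represented as None, and reporting N−1 degrees of freedom (standard for a paired test) is an assumption about what the procedure displays.

```python
from math import sqrt

def correlated_t(col_a, col_b):
    """Correlated t-Test: t = Sum(D)/sqrt((N*Sum(D^2) - Sum(D)^2)/(N-1)),
    where D = column B - column A over the non-missing pairs."""
    pairs = [(a, b) for a, b in zip(col_a, col_b)
             if a is not None and b is not None]
    d = [b - a for a, b in pairs]          # lower-lettered column subtracted
    n = len(d)
    sum_d = sum(d)
    sum_d2 = sum(x * x for x in d)
    t = sum_d / sqrt((n * sum_d2 - sum_d ** 2) / (n - 1))
    return t, n - 1                        # t value and degrees of freedom

t, df = correlated_t([50, 60, 70], [55, 62, 78])
print(t, df)   # positive t: column B means are higher than column A
```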
The t-Test procedure will calculate and display a t value and its associated degrees of freedom, which can then be compared against a table of t values to determine, at your desired level of significance, whether or not the null hypothesis may be rejected. The null hypothesis is usually stated as Ho: there is no difference in the means of the two columns examined. If the calculated t value exceeds the published table value, the null hypothesis may be rejected, which in turn suggests that a significant difference exists between the two grade columns under examination. Whether that difference is an improvement or a decline depends on the sign of the t value. It is important to understand that the sign is determined by the difference calculation, e.g., column B – column A; the ordering is always fixed such that the lower-lettered column is subtracted from the higher-lettered column.
Warning: The t-Test requires that the scale of measurement be the same. In other words, the scores appearing in the two columns being subjected to a t-Test must have the same perfect score; e.g., a t-Test between an assignment marked out of 15 and an exam marked out of 100 will yield a nonsense value.