
Types of assessment authoring software

 

QuestionMark Perception

 

QuestionMark Perception is an established commercial assessment package, widely adopted by educational institutions. QuestionMark has widened its provision from simple activities based on drag and drop, multiple choice and fill-in-the-blank to accommodate spoken response and simulation.

• Drag-and-drop: the participant clicks and drags up to ten images into position. The feedback and score depend upon the final position of the images.

• Essay question: the participant answers by typing up to 30,000 characters of text. Perception's Scoring Tool enables essay questions within assessments to be graded using customised rubrics. You may define what is right or wrong in advance by entering a list of acceptable answers, or print out a report of the responses for manual grading. The logic can also allow scoring based on the presence or absence of keywords or key phrases. This question type can also be used to solicit opinions or suggestions on a particular subject.

• Explanation screens: insert text or graphics for the participant to view prior to answering a series of questions.

• Fill-in-the-blank: the participant is presented with a statement where one or more words are missing and supplies the missing words. The score can be determined by checking each blank against a list of acceptable words, and blanks can be checked for misspelled words.

• Hotspot: the participant clicks on a picture to indicate their choice; feedback and scores are assigned depending on where the click falls. A graphics editor is provided to simplify specifying the choice areas.

• Likert scale: the participant selects one of several options such as "strongly agree" through "strongly disagree" that are weighted with numbers to aid analysis of the results.

• Matching: two series of statements/words are presented and the participant must match items from one list to items within the other list.

• Multiple choice: the participant selects one choice from up to 40 possible answers. There is no limit to the length of each answer.

• Multiple response: similar to multiple choice except the participant is not limited to choosing one response; he/she can select none, one or more of the choices offered.

• Matrix: this question type presents several multiple-choice questions together where the participant selects one choice for each statement or question presented. This question type is used to cross-relate responses from a single item.

• Numeric questions: the participant is prompted to enter a numeric value, which may receive one score for an exact answer and a different score if the response falls within an accepted range.

• Pull-down list (selection question): a series of statements is presented and the participant matches each statement with an option from a pull-down list.

• Ranking (Rank in Order): a list of choices must be ranked numerically with duplicate matches not allowed.

• Select-a-blank: the participant is presented with a statement where a word is missing and selects the missing word from a pull-down list.

• True/False: the participant selects "true" or "false" in response to the question.

• Word response (text match): the participant types in a single word or a few words to indicate their answer. You define right or wrong words or phrases in advance by entering a list of acceptable answers. The grading logic can also allow scoring based on the presence or absence of keywords or key phrases and can check for misspellings. (A minimal sketch of this kind of scoring logic follows this list.)

• Yes/No: the participant selects "Yes" or "No" in response to the question.

• Macromedia Captivate Simulations: Perception supports an interface with Macromedia Captivate that allows subject matter experts to create simulations that can provide scoring information for multiple interactions and have the results recorded within the answer database.

• Spoken Response: using the Horizon Wimba Connector, you can record a participant's voice as the answer to a question. Scores for spoken responses can be processed along with other test scores using Perception's reporting tools.

• Java: Perception supports an interface with Java that allows programmers to program customized items using Java and have the results recorded within the answer database.

• Macromedia Flash: Perception supports an interface with Macromedia Flash that allows programmers to program customized items using Flash and have the results recorded within the answer database.
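
The scoring options described above (acceptable-answer lists, keyword presence or absence, exact values versus ranges) amount to fairly simple matching logic. The Python sketch below illustrates that logic in general terms; the function names and rubric formats are illustrative assumptions, not Perception's actual implementation or API.

```python
# A minimal, hypothetical sketch of the scoring logic described above.
# It is not Perception's implementation; names and formats are illustrative.

def score_word_response(answer, acceptable):
    """Word response / text match: full credit if the trimmed,
    case-insensitive answer appears in the list of acceptable answers."""
    return 1 if answer.strip().lower() in [a.lower() for a in acceptable] else 0

def score_keywords(response, keyword_weights):
    """Essay / keyword grading: award the weight attached to each
    key phrase that is present in the response."""
    text = response.lower()
    return sum(w for phrase, w in keyword_weights.items() if phrase.lower() in text)

def score_numeric(answer, exact, lo, hi, exact_score=2, range_score=1):
    """Numeric question: one score for the exact value, a lesser score
    if the response merely falls within the accepted range."""
    if answer == exact:
        return exact_score
    return range_score if lo <= answer <= hi else 0

print(score_word_response(" Photosynthesis ", ["photosynthesis"]))      # 1
print(score_keywords("Chlorophyll absorbs light", {"chlorophyll": 2}))  # 2
print(score_numeric(9.8, exact=9.81, lo=9.5, hi=10.1))                  # 1
```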

 

Hot Potatoes

 

Hot Potatoes enables the tutor to create interactive multiple-choice, short-answer, jumbled-sentence, crossword, matching/ordering and gap-fill exercises for the World Wide Web. Hot Potatoes has been widely available to the educational community for many years. http://www.halfbakedsoftware.com/

 

Quandary

 

Another application developed by Half-Baked Software in Canada, Quandary allows the tutor to create and deliver Web-based Action Mazes. An Action Maze is a kind of interactive case-study; the user is presented with a situation, and a number of choices as to a course of action to deal with it. On choosing one of the options, the resulting situation is then presented, again with a set of options. Working through this branching tree is like negotiating a maze, hence the name "Action Maze". http://www.halfbakedsoftware.com/

Action mazes can be used for many purposes, including problem-solving, diagnosis, procedural training, and surveys/questionnaires.
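
Structurally, an action maze is a branching tree of decision nodes: a situation, a set of options, and for each option a follow-on node. The Python sketch below models and walks such a maze; it illustrates the data structure only, not Quandary's own format (Quandary publishes mazes as web pages), and the example content is invented.

```python
# Hypothetical data model for an action maze: each node holds a situation
# and a mapping from option text to the node that option leads to.

class MazeNode:
    def __init__(self, situation, options=None):
        self.situation = situation      # text presented to the user
        self.options = options or {}    # option text -> next MazeNode

def walk(node):
    """Traverse the maze interactively until a node with no options
    (a terminal situation) is reached."""
    while node.options:
        print(node.situation)
        for i, text in enumerate(node.options, 1):
            print(f"  {i}. {text}")
        choice = list(node.options)[int(input("Choose an option: ")) - 1]
        node = node.options[choice]
    print(node.situation)

# A two-step invented example
win = MazeNode("The client accepts the proposal. Well done.")
lose = MazeNode("The client walks away. Try again.")
start = MazeNode("A client questions your project estimate.",
                 {"Explain the costing in detail": win,
                  "Dismiss the concern": lose})
# walk(start)  # uncomment to run interactively
```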

 

QML – Quest Markup Language

 

A PHP-based facility that allows end users to create simple quests or fantasy-based scenarios.

http://questml.com/

 

Recent project work on assessment technology

 

There has been a recent emphasis on sharing, sequencing and interoperability of assessment software. E-assessment has recently stabilised around the types of activities listed under the individual types of software above. Newer innovations in assessment, such as voting systems, still have great potential for pedagogical exploration. With recent UK government funding tending towards integrated learning environments and the institutional VLE/MLE, there have been a number of projects relating to item banks, sharing items between item banks and, most recently, the use of web services to share and search item banks. There are a number of examples of plagiarism detection software for student programming courses that can also serve as debugging assistants for learners. There are also software projects aimed at disseminating best practice for institutions using online assessment systems.

Recent developments in e-assessment have generally been secondary in nature rather than new innovations in assessment. One initiative in particular is becoming a cornerstone of new research: the E-Framework, being developed jointly by JISC in the UK and DEST in Australia to create service-oriented architectures for e-learning, has, as would be expected, increased project activity relating to assessment-related web services. FREMA, the reference model project mapping the assessment domain and the projects and services currently available, shows the interrelation between assessment projects. FREMA offers a useful model of the existing domain, but has yet to detail the pedagogical or research components of the projects and software mentioned. http://www.frema.ecs.soton.ac.uk/

Subject areas in online assessment

Online assessment is generally limited to generic solutions such as those outlined for Hot Potatoes, Quandary and QuestionMark Perception. There has been greater effort in domains such as mathematics, science and computer programming, which generally reflects the overall bias of e-learning towards science-related projects. However, there are indications that online assessment is about to undergo a re-examination (see the REAP project, http://www.reap.ac.uk/about/index.htm). Whilst progress appears to have been made in maths education, there are still many subject domains in need of individual consideration; for example, language learning needs ways of assessing speech in terms of pronunciation, grammar and syntax, vocabulary and use of verbs, in addition to writing, reading and comprehension exercises.

General

As can be seen in the table relating to recent assessment projects in the UK, there are a number of general projects relating to generating item banks, searching item banks and good practice in using online assessment. Below are two projects relating to mathematics education which address issues in automated testing systems.

Maths

ActiveMath http://www.activemath.org/

ActiveMath is a stable, web-based, multi-lingual, user-adaptive, interactive learning system for mathematics. It is a Semantic Web application with a number of services.

It employs technology for enhancing learning with scaffolding and instruction as well as with interactive and constructivist elements. The system provides an open architecture, knowledge representations, and techniques for the Web-presentation of interactive mathematics documents (hyperdocuments as well as printed material, slides as well as hyperbooks).

It is based on semantically OMDoc-encoded learning objects that are annotated with pedagogical and other metadata. Its modular architecture combines components such as a learner model, a course generator, a knowledge base and several integrated service systems. The course generator uses a formalisation of the pedagogical knowledge to assemble individual 'books' according to the learner's goals, preferences and knowledge.
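
As a rough illustration of this architecture (the field names and selection rules below are assumptions made for the sketch, not ActiveMath's real code or the OMDoc schema), the course generator can be thought of as a filter over metadata-annotated learning objects, guided by the learner model:

```python
# Illustrative sketch of assembling a personalised 'book' from
# metadata-annotated learning objects, in the spirit of ActiveMath's
# course generator. All field names and rules here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class LearningObject:
    identifier: str
    concept: str                  # e.g. "derivative"
    difficulty: int               # 1 (easy) to 3 (hard)
    prerequisites: list = field(default_factory=list)

def generate_book(repository, goals, known, max_difficulty):
    """Select objects that cover the learner's goal concepts, skip what the
    learner model marks as known, and respect difficulty and prerequisites."""
    return [obj for obj in repository
            if obj.concept in goals
            and obj.concept not in known
            and obj.difficulty <= max_difficulty
            and all(p in known for p in obj.prerequisites)]

repository = [
    LearningObject("lo1", "limit", 1),
    LearningObject("lo2", "derivative", 2, prerequisites=["limit"]),
]
book = generate_book(repository, goals={"derivative"}, known={"limit"},
                     max_difficulty=2)
print([obj.identifier for obj in book])   # ['lo2']
```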

ActiveMath's interactive exercise subsystem clearly separates components for evaluation and feedback. This adds flexibility, generality and transparency to exercises and supports action tracing and diagnosis. ActiveMath includes cognitive tools such as a lexicon, an interactive concept map, an open student model and notes. The lexicon facilitates semantic and fuzzy search for concepts and browsing their dependencies. The notes tool allows the learner to write private and public notes, which are then attached to concepts.

Special types of exercises and feedback are being developed to help the learner overcome misconceptions and to improve self-regulation and reflection.

AiM – Assessment in Mathematics

AiM is a system for computer-aided learning and assessment in mathematics, with an emphasis on formative assessment. It is built on top of the symbolic mathematics program Maple, and thus has a rich understanding of mathematics built in. This allows it to randomize questions in complex ways, detect correct answers given in unusual forms, trap common errors and give intelligently tailored feedback and so on.

Conventional online assessment systems are very limited in the types of questions that can be set. Questions where there is a large variety of correct answers, or where the correct answer can be given in many different forms, cannot be marked with conventional assessment systems. However, it is exactly this type of question that is used heavily in science teaching; there are usually many ways to write a formula, for example.

The solution lies in using a computer algebra system to power the assessment system. The built-in knowledge of mathematics that the computer algebra system provides opens up entirely new possibilities for computer-aided assessment, of which a few examples follow:

 

• Equivalent answers

AiM can mark questions where the correct answer can be expressed in many different forms. In mathematics this is the rule rather than the exception because of algebraic equivalence between expressions, for example (x+1)² = x² + 2x + 1. AiM can identify these equivalences. (A sketch of such an equivalence check follows these examples.)

• Ask for examples

The system can mark questions that ask the student to provide an example. Giving examples is a higher-order skill that was impossible to assess with conventional CAA systems.

• Intelligent randomization

AiM can randomise problems in such a way that the level of difficulty is kept constant. For example, if a question asks the student to 'diagonalise' a 2×2 matrix, the system can randomise the problem in a way that guarantees that the answer always contains only integers. The trick is to reverse-engineer the randomised question from a randomised answer.

• Give feedback and partial credit

AiM can check each condition separately on the student’s answer and assign partial credit accordingly.
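
AiM itself is built on Maple, but the two ideas above, equivalence marking and reverse-engineered randomisation, can be sketched with any computer algebra system. The illustration below uses the open-source SymPy library for Python; it is a sketch under stated assumptions, not AiM's code.

```python
# Sketch of CAS-powered marking (illustration only; AiM uses Maple).

import random
import sympy

def equivalent(student_answer, model_answer):
    """True if two expressions are algebraically equivalent, e.g.
    '(x+1)**2' and 'x**2 + 2*x + 1'."""
    difference = sympy.sympify(student_answer) - sympy.sympify(model_answer)
    return sympy.simplify(difference) == 0

def random_diagonalisable_matrix():
    """Reverse-engineer the question from the answer: pick random integer
    eigenvalues first, then build A = P*D*P^-1 with a unimodular P, so
    both the question matrix and the answer contain only integers."""
    d = sympy.diag(random.randint(-5, 5), random.randint(-5, 5))
    p = sympy.Matrix([[1, 1], [1, 2]])    # determinant 1, integer inverse
    return p * d * p.inv(), d

print(equivalent("(x+1)**2", "x**2 + 2*x + 1"))   # True
A, d = random_diagonalisable_matrix()
print(A, "has eigenvalues", A.eigenvals())        # always integers
```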

Plagiarism and programming

Plagiarism in student programming code has been addressed in a number of projects that both check code authenticity and provide debugging support for the student.
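
Techniques vary between projects; as a generic illustration only (not any named project's algorithm), one simple approach compares token n-gram fingerprints of two submissions:

```python
# Minimal sketch of code-similarity detection via token n-gram overlap,
# a simplified cousin of fingerprinting schemes; purely illustrative.

import re

def ngrams(code, n=4):
    """Break code into word/symbol tokens and return the set of
    n-token sequences it contains."""
    tokens = re.findall(r"\w+|\S", code)
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(code_a, code_b):
    """Jaccard similarity of the two submissions' n-gram sets; values
    near 1.0 suggest a possible common origin."""
    a, b = ngrams(code_a), ngrams(code_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

print(similarity("for i in range(10): print(i)",
                 "for j in range(10): print(j)"))
# noticeable overlap despite the renamed variable
```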

JISC glossary on e-assessment

JISC launched a glossary of terms relating to e-assessment in 2006 for all sectors of the learning technology industry. http://www.jisc.ac.uk/assessment.html

REAP – Re-Engineering Assessment Practices

Assessment is one of the most important drivers for transformational change; it determines both how and what students study. Yet research shows that prevailing modes of assessment often promote high teacher workloads rather than enhanced student learning. There is a need to rethink institutional assessment systems, away from a model where teachers transmit marks to one where students develop, over the course of a degree, their own ability to self-assess and self-correct.

This project will involve curriculum re-engineering within three institutions and the dissemination of improved models of assessment practice supported by technology across the HE sector. Each partner will pilot a range of approaches and e-learning technologies in support of assessment. The initial focus will be on large-enrolment first-year classes, with more than 3,000 students involved in the first implementation. The scope will be broad, going well beyond online tests and simulations to include classroom communication systems, virtual learning environments, e-portfolios, student records systems and, importantly, online-offline approaches.

The project will demonstrate how teacher workload can be reduced and learning quality enhanced. Models of departmental transformation, re-engineered assessment practices, planning tools, web-based resources and a programme of dissemination will ensure that the whole Scottish HE sector benefits. A cost-benefit analysis of changes in departmental workload and assessment processes will provide evidence of effectiveness.
