Testing Writing




To test the writing skill, we ask the students to write.
In making students write, we have to consider three things. The first is deciding on a topic that suits the group of students, that is, a topic on which all students can produce writing. The second is that the tasks should elicit a valid sample of writing. The third is that the samples of writing should be able to be scored validly and reliably.
Representative Tasks
1. Specify all possible content
There are several elements that need to be considered in the writing test specifications: operations, types of text, addressees, length of texts, topics, dialect, and style. In addition to these, more detailed elements can appear when they are needed, and the elements in each activity or task will differ. For a more restricted task, the purpose might be to find out whether the students' written English is adequate for study through the medium of English at a particular overseas university; a task of this kind is usually integrated with the listening component of the test. Using the suggested framework will also help us describe the relevant tasks more succinctly.
2. Include a representative sample of the specified content
The ideal test would be one which requires each candidate to perform all the relevant potential writing tasks. The scores given to candidates should really reflect their ability.
The examiners have made a serious attempt to create a representative sample of tasks, but this is not possible in a single version of the test; it is a problem to which there is no easy answer. Since it is only under the heading of operations that there is any significant variability, a test that requires the student to write four answers could cover the whole range of tasks, assuming the differences of topic really did not matter. In fact, the writing components of each version of the test contain two writing tasks, and so 50% of all tasks are to be found in each version of the test. The results may be very important to candidates; they could determine whether candidates are allowed to study overseas. Performance even on the same task is unlikely to be perfectly consistent, so we have to offer candidates as many fresh starts as possible. The main point is that we test the ability to write; we should not be concerned with whether candidates are creative, imaginative, or intelligent.
Another ability that at times interferes with the accurate measurement of writing ability is reading. One way to reduce dependence on the candidate's ability to read is to make use of illustrations.
Writing tasks should be well defined: the candidates should know just what is required of them, and they should not be allowed to go too far astray. Providing information in the form of notes is a very useful device. The tasks should not only fit well with the specifications but should also be made as authentic as possible. To ensure valid and reliable scoring, there are some points that we have to consider: setting tasks which can be reliably scored, setting as many tasks as possible, restricting candidates, giving no choice of task, ensuring samples are long enough, creating appropriate scales for scoring, holistic scoring, and analytic scoring.
In constructing a rating scale, we have to formulate several questions. The first concerns the purpose of testing. Then we decide whether the scoring should be analytic or holistic, or both. The components the scale should have and the number of levels should also be considered. Then search for existing scales and modify them to suit your purpose. Finally, trial the scale you have constructed and make the modifications that prove necessary.
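To make the idea of an analytic scale concrete, here is a minimal sketch in Python. The components (grammar, vocabulary, mechanics, organization), the 1-5 levels, and the weights are illustrative assumptions for this sketch only, not a scale prescribed by the chapter; any real scale would come out of the trialling process described above.

# A minimal sketch of an analytic rating scale (illustrative only).
# The components, weights, and 1-5 levels below are assumptions,
# not a scale prescribed by the chapter.

SCALE = {
    "grammar":      {"weight": 0.3, "levels": (1, 5)},
    "vocabulary":   {"weight": 0.3, "levels": (1, 5)},
    "mechanics":    {"weight": 0.2, "levels": (1, 5)},
    "organization": {"weight": 0.2, "levels": (1, 5)},
}

def composite_score(ratings):
    """Combine per-component ratings into one weighted analytic score."""
    for name, value in ratings.items():
        low, high = SCALE[name]["levels"]
        if not low <= value <= high:
            raise ValueError(f"{name} rating {value} is outside {low}-{high}")
    return sum(SCALE[name]["weight"] * v for name, v in ratings.items())

# Example: one candidate's ratings from a single scorer.
print(composite_score({"grammar": 4, "vocabulary": 3,
                       "mechanics": 5, "organization": 4}))  # about 3.9

A holistic scale, by contrast, would replace the separate components with a single overall band for each script.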
The next point is following acceptable scoring procedures. It is important that scoring should take place in a quiet, well-lit environment. Multiple scoring should ensure the reliability of scores, even if not all scorers are using quite the same standard. Once scoring is completed, it is useful to carry out a simple statistical analysis to discover whether anyone's scoring is unacceptably aberrant (a simple check of this kind is sketched after this paragraph). The last point is feedback. This is very important and useful for helping students recognize which parts they need to revise and which parts are already good.
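As an illustration of such a statistical check, the sketch below assumes scores are kept as a mapping from each scorer to the scores they awarded, and flags any scorer whose mean award drifts far from the overall mean. The data layout, the example scores, and the threshold are all assumptions made for this example.

# Illustrative sketch: flag scorers whose average award deviates
# markedly from the overall average (a crude screen for aberrant
# scoring; the data and threshold here are invented for the example).
from statistics import mean, pstdev

scores_by_scorer = {
    "scorer_A": [14, 15, 13, 16, 15],
    "scorer_B": [15, 14, 16, 15, 14],
    "scorer_C": [9, 10, 8, 11, 9],   # noticeably more severe
}

all_scores = [s for awarded in scores_by_scorer.values() for s in awarded]
overall_mean = mean(all_scores)
overall_sd = pstdev(all_scores)

for scorer, awarded in scores_by_scorer.items():
    z = (mean(awarded) - overall_mean) / overall_sd
    if abs(z) > 1.0:  # threshold is arbitrary; tune it from experience
        print(f"{scorer} looks aberrant (mean {mean(awarded):.1f}, z = {z:+.2f})")

In practice, a flagged scorer's scripts would be re-marked or discussed rather than discarded automatically.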
Satriani, Emilia, and Gunawan (2012) conducted a study on the implementation of the contextual teaching and learning approach to teaching English writing to second graders of a Junior High School in Bandung. The study aimed to investigate the strategies of Contextual Teaching and Learning (CTL) (as adapted from Crawford, 2001) and the advantages of using the CTL approach. The study employed a qualitative case study research design. The data were obtained from several instruments, namely class observations, student interviews, and students' writing products, which were then analyzed using writing assessment criteria taken from Rose (2007, as cited by Emilia, 2011, p. 151). The findings revealed that the teaching writing program was successful in improving students' recount writing skill. Specifically, students showed some improvement in schematic structure, grammar rules, and graphic features. Moreover, the data from observation, interviews, and documentation of students' texts showed some benefits of CTL. These include: (1) engaging students in the writing activity; (2) increasing students' motivation to participate actively in the writing class; (3) helping students to construct their writing; (4) helping students to solve their problems; (5) providing ways for students to discuss or interact with their friends; and (6) helping the students to summarize and reflect on the lesson. Based on these findings, it is recommended that CTL be implemented in teaching writing.

Reference
Satriani, I., Emilia, E., and Gunawan, H. 2012. Contextual Teaching and Learning Approach to Teaching Writing. Indonesian Journal of Applied Linguistics, Vol. 2, No. 1. http://ejournal.upi.edu/index.php/IJAL/article/view/70




Chapter Report
Testing Reading
Making a test of reading skill and making a test of oral skill are different things. Some people argue that making a test of reading skill is very easy: they take a text, formulate several questions, and give them to the students, and they call this process making a test. Although the process seems to make sense, simply giving the students questions in this way is inappropriate, and the resulting test is not very good. In short, it probably does not measure what we want to measure; in other words, it is invalid.
The main problem is that we cannot measure students' reading ability without making them produce something, for example in oral or written form. Reading is a receptive skill, which means that readers receive information rather than produce it. There are two reasons for this problem. The first is that there is uncertainty about the skills which may be involved in reading. Second, even if we believe in the existence of a particular skill, it is still hard to know whether an item has succeeded in measuring it. To cope with this problem, we can draw on our own experience: we are readers ourselves and are aware of at least some of these skills. On one occasion we may read slowly and carefully, and at another time we may flip from page to page.
If we reflect on our reading, we become conscious of other skills we have. As we read, we are continually making inferences about people, things, and events. It would not be helpful to continue giving examples of the reading skills we know we have; the point is that we realize they exist.
To be consistent with our general framework for specifications, we will refer to the skills that readers perform when reading a text as operations. There has been a tendency in the past for expeditious reading to be given less prominence in tests than it deserves. In order to infer the topic of a text quickly, we can make pragmatic inferences. Any knowledge that is needed from outside the text must be knowledge which candidates can be assumed to have.
There are a number of parameters regarding the text: type, form, graphic features, topic, style, intended readership, length, readability or difficulty, and range of vocabulary and grammatical structure. Reading speed refers to the number of words read per minute; for example, a reader who finishes a 600-word passage in four minutes is reading at 150 words per minute. Every person reads at a different speed.
Regarding criterial levels of performance, there is no need to specify them before tests are constructed or even before they are administered. Setting criterial levels for receptive skills is more problematic than for productive skills. The best way to proceed is to use the test tasks themselves to define the level. All of the items should be within the capabilities of anyone to whom we are prepared to give a pass; this means that, in order to pass, a candidate should in principle be expected to score 100%.
In setting the tasks, we first must select the texts. After that, we write the items. The purpose is to write items that will measure the ability in which we are interested, elicit reliable behavior from candidates, and permit highly reliable scoring. The techniques used should interfere as little as possible with the reading itself, and they should not add a significantly difficult task on top of the reading. In multiple choice, the candidate provides evidence of successful reading by making a mark against one out of a number of alternatives.
For short answer, it is best if there is a unique correct response: only one possible correct answer, which may be a single word or something longer, but no more than one sentence. This technique can be used to test the ability to make various distinctions, and it can be used to write items related to the structure of a text. We should remember that the scoring of sequencing items of this kind can be problematic: if the test taker puts one element of the text out of sequence, it may cause others to be displaced and require complex decision-making on the part of the scorers. The disadvantage is that a test taker who has understood the relevant part of the passage may still not be able to express the answer well.
Another technique that can be used in testing reading is gap filling, for instance when we want to know whether the candidate has grasped the main idea of a paragraph. The disadvantage of gap filling is that the candidate may have to provide a word which is not in the passage.
Information transfer is a technique to decrease the demands on candidates' writing ability. In general, responses should make minimal demands on writing ability. Where the candidates share a single native language, the technique of presenting items and writing responses in that language can be used. The procedure for writing items should start with careful reading of the text; then decide what tasks it is reasonable to expect candidates to perform in relation to it.
Al-Jamal, D., Al-Hawamleh, M., and Al-Jamal, G. (2013) conducted a study which aimed to assess the level of reading comprehension proficiency of EFL Jordanian readers with regard to the relationship between identifying the main idea in a paragraph and language proficiency in expository texts. Investigating the comprehension instruction process of EFL teachers was the other focus of the study, which demonstrated an intensified focus on the significance of the main idea while reading comprehension instruction takes place. The focus of this mixed-methods study was on the descriptive data from a reading comprehension test as well as on classroom observation data. The sample of the study consisted of 649 10th graders, distributed randomly across Irbid directorate of education schools in the 2011/2012 school year, who undertook a reading comprehension test developed for the purposes of the study. The sample also included 15 teachers, each of whom was observed for three lessons. The results of the study revealed a moderate reading comprehension proficiency level among the 10th graders, along with negligible instruction in comprehension skills by the EFL teachers.
References
Al-Jamal, D., Al-Hawamleh, M., and Al-Jamal, G. 2013. An Assessment of Reading Comprehension Practice in Jordan. Journal of Educational Sciences, Vol. 9, No. 3, pp. 335-344. http://journals.yu.edu.jo/jjes/Issues/2013/Vol9No3/7.pdf
Hughes, A. 2003. Testing for Language Teachers. Cambridge: Cambridge University Press.
