Learn how your words are counted in IELTS. This page explains how words, numbers and symbols are counted for IELTS listening, reading and writing. If you make mistakes with the number of words, you can lose points, which can affect your band score.
How words are counted in IELTS
1. Numbers, dates and times are counted as words in writing. For example, 30,000 = one word / 55 = one word / 9.30am = one word / 12.06.2016 = one word. In listening, 30,000 is counted as one number and 9.30am is also counted as one number.
2. Dates written as both words and numbers are counted in this way: 12th July = one number and one word.
3. Symbols with numbers are not counted. For example, 55% = one number (the symbol “%” is not counted as a word). However, if you write “55 percent” it is counted as one word and one number.
4. Small words such as “a” or “an” are each counted as one word. All prepositions, such as “in” or “at”, are also counted. Every word is counted.
5. Hyphenated words like “up-to-date” are counted as one word.
6. Compound nouns which are written as one word are also counted as one word. For example, blackboard = one word.
7. Compound nouns which are written as two separate words, are counted as two words. For example, university bookshop = two words.
8. All words are counted, including words in brackets. For example, in IELTS writing: “The majority of energy was generated by electricity (55%).” This sentence is counted as 9 words; the number in brackets is counted.
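As a rough illustration, the counting rules above can be approximated in code. This is a sketch, not an official IELTS tool; the function name and the tokenization are my own simplification (it treats any whitespace-separated token, after stripping surrounding punctuation, as one countable unit, which matches the rules for numbers, symbols attached to numbers, and hyphenated words).

```python
# Sketch of the IELTS counting rules above (illustrative only; the
# function name and tokenization are my own, not an official tool).
def ielts_count(text: str) -> int:
    """Count words/numbers the way the rules above describe."""
    total = 0
    for token in text.split():
        # Strip surrounding punctuation such as brackets and full stops,
        # but keep internal characters: "30,000", "9.30am", "up-to-date"
        # and "55%" each survive as a single countable unit.
        token = token.strip("().,;:\"'")
        if token:
            total += 1
    return total

# "The majority of energy was generated by electricity (55%)." -> 9
print(ielts_count("The majority of energy was generated by electricity (55%)."))
```

Note how "12th July" comes out as 2 (one number plus one word) and "up-to-date" as 1, in line with rules 2 and 5.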
This paper shows that the nature of letters—consonant versus vowel—modulates the process of letter position assignment during visual word recognition. We recorded Event Related Potentials while participants read words in a masked priming semantic categorization task. Half of the words included a vowel as initial, third, and fifth letters (e.g., acero [steel]). The other half included a consonant as initial, third, and fifth (e.g., farol [lantern]). Targets could be preceded 1) by the initial, third, and fifth letters (relative position; e.g., aeo—acero and frl—farol), 2) by 3 consonants or vowels that did not appear in the target word (control; e.g., iui—acero and tsb—farol), or 3) by the same words (identity: acero–acero, farol–farol). The results showed modulation in 2 time windows (175–250 and 350–450 ms). Relative position primes composed of consonants produced similar effects to the identity condition. These 2 differed from the unrelated control condition, which showed a larger negativity. In contrast, relative position primes composed of vowels produced similar effects to the unrelated control condition, and these 2 showed larger negativities as compared with the identity condition. This finding has important consequences for cracking the orthographic code and developing computational models of visual word recognition.
consonants and vowels, ERPs, letter processing, relative position coding, visual word recognition
Printed word recognition is a key process for reading. Whether words are recognized visually as wholes or via their constituent letters has been debated for more than a century. Although early research suggested that words could be identified by the use of word shape (see Cattell 1886) and some investigators still argue that supraletter features such as word shape play a role in visual word recognition (e.g., Healy et al. 1987; Healy and Cunningham 1992; Allen et al. 1995), most theorists currently support the idea that words are initially formed from component letters (analytical models; e.g., the search model, Forster 1976; the multiple read-out model, Grainger and Jacobs 1996; the interactive–activation model, McClelland and Rumelhart 1981; and the activation–verification model, Paap et al. 1982). Recent research has shown that words in alphabetic languages are processed via their constituent letters (see Pelli et al. 2003). However, the process of coding for letters (i.e., how letters are assigned within words, including their identity and position) remains poorly understood. To recognize a printed word, we need to process the identity and position of its letters, hence distinguishing between trail and trial, but not between tABLe and TabLE or chair and CHAIR. Thus, 1 critical issue for understanding visual word recognition is the nature and functioning of the orthographic input code to the system. Recently, a neurobiological model has been proposed (the local combination detector or LCD model; Dehaene et al. 2005; see also Vinckier et al. 2007) as a new attempt to crack the orthographic code. One important aspect of the LCD model is that it distinguishes between letter identity and letter order processes, taking into account neurobiological constraints and the development of new coding schemes for computational models of visual word recognition. According to the Dehaene et al. 
proposal, the brain decodes words through a hierarchy of LCDs in the occipito–infero–temporal pathway, which are sensitive to increasingly larger fragments of words. In particular, they tentatively propose detectors for letter shapes in V4, abstract letter detectors in V8, which represent letters denoting their identities but abstracted from their visual appearance (e.g., CaSe, font, and size), and detectors for letter strings in the left fusiform gyrus. However, as with current computational models, this proposal assumes that the process of assigning letters within words is similar for both consonants and vowels. Here, we present evidence from event related potentials (ERPs) that challenges this assumption.
Most current computational models of visual word recognition (e.g., McClelland and Rumelhart 1981; Paap et al. 1982; Grainger and Jacobs 1996; Coltheart et al. 2001; Perry et al. 2007) have assumed that letter position coding was “channel specific.” That is, letters are assumed to be tagged to their positions in the orthographic representation of the word presented when the identities of the letters have been encoded. However, 2 recent findings have challenged this assumption: the transposed-letter effect and the relative position priming effect.
The transposed-letter priming effect refers to the fact that nonword primes created by transposing 2 letters of a real word produce form-priming effects, relative to an orthographic control created by replacing letters (e.g., jugde–JUDGE vs. jupte–JUDGE; Perea and Lupker 2003, 2004; see also Perea and Carreiras 2006a, 2006b, 2006c, 2008; Duñabeitia et al. 2007; Perea et al. 2008). The relative position priming effect has been defined as a “variety of orthographic priming that involves a change in length across prime and target such that shared letters can have the same order without being matched in terms of absolute, length-dependent position” (Grainger 2008, p. 16). Target words like BALCON preceded by nonword primes created by replacing some letters with hyphen marks (e.g., B-LC-N; the absolute position condition) or with the same letters without the hyphens (e.g., BLCN; the relative position condition) are recognized faster than when preceded by nonsense symbol strings (e.g., %%%%%%) or by unrelated control strings (e.g., CRTR) (see also Humphreys et al. 1990). Importantly, the size of the effect is the same for the absolute and the relative position conditions. In addition, the effects are canceled when relative position is violated (e.g., BCLN instead of BLCN) (Peressotti and Grainger 1999; Grainger, Granier, et al. 2006), that is, when the same letters as those used as relative position primes are presented in a partially scrambled order. This condition produces reaction times similar to those for unrelated primes. The relative position priming effect also vanishes at long prime durations (ca. 80 ms) (Grainger, Granier, et al. 2006). 
Thus, because these effects are short lived and similar for absolute and relative position primes, the findings suggest that the orthographic information extracted at early stages of processing is letter identity and the relative ordering of letters in the input string, with no specific encoding of the absolute position of each letter.
The transposed-letter effect and the relative position priming effect generate serious problems for standard computational models (e.g., McClelland and Rumelhart 1981; Grainger and Jacobs 1996; Coltheart et al. 2001) because none of them can account for either of these 2 effects: 1) transposed-letter neighbors are more similar to target words than replaced-letter neighbors, and 2) 2 strings that differ in length, and therefore in the absolute position of the letters, but that do not differ in the relative order/position of the letters activate each other to a great extent. However, a number of input “coding schemes” have recently been proposed that successfully capture these effects (e.g., SERIOL model, Whitney 2001; SOLAR model, Davis 1999; open-bigram model, Grainger and van Heuven 2003; Overlap Open-Bigram model, Grainger, Granier, et al. 2006; and overlap model, Gomez et al. 2008), although the basic mechanisms of how letter position is encoded differ across these models (e.g., via the activation of open bigrams in the SERIOL and open-bigram models, via spatial coding in the SOLAR model, or via a noisy perceptual input in the overlap model). Interestingly, these input schemes are favored in the LCD model. There is 1 caveat, however: These models assume that consonants and vowels are processed in exactly the same way, but this does not seem to be the case.
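To make the open-bigram idea concrete, the following is a minimal sketch (my own simplified scoring, not the exact scheme of any of the models cited): a string is coded as the set of its ordered letter pairs, and prime–target similarity is the proportion of the prime's open bigrams that also occur in the target.

```python
from itertools import combinations

# Illustrative open-bigram coding (simplified; not any published model's
# exact parameterization).
def open_bigrams(word: str) -> set:
    """All ordered letter pairs (positions i < j) of a string."""
    return {a + b for a, b in combinations(word, 2)}

def match(prime: str, target: str) -> float:
    """Proportion of the prime's open bigrams that occur in the target."""
    p = open_bigrams(prime)
    return len(p & open_bigrams(target)) / len(p)

print(match("jugde", "judge"))   # 0.9  transposed-letter prime: high overlap
print(match("jupte", "judge"))   # 0.3  replaced-letter control: low overlap
print(match("blcn", "balcon"))   # 1.0  relative position preserved
print(match("bcln", "balcon"))   # <1.0 relative position violated (cl reversed)
```

The scores mirror the behavioral pattern above: transposed-letter and relative position primes keep most of the target's ordered letter pairs, whereas replaced-letter controls and scrambled primes do not.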
Recent transposed-letter experiments (e.g., Perea and Lupker 2004; see also Perea and Carreiras 2006a; Carreiras et al. 2007, 2009) have found a differential role of consonants and vowels in transposed-letter similarity effects. For instance, Perea and Lupker (2004) obtained a masked priming effect for consonant transpositions (relovución–REVOLUCIÓN vs. the replaced-letter control retosución–REVOLUCIÓN), but not for vowel transpositions (reluvoción–REVOLUCIÓN vs. relavición–REVOLUCIÓN) (see Duñabeitia et al. 2009 for the suitability of this control). There are specific hypotheses suggesting that the consonant–vowel structure is determined and coded very early in word processing (e.g., Berent and Perfetti 1995). Furthermore, recent research with other paradigms and methodologies (e.g., reaction times, ERPs, functional magnetic resonance imaging) in auditory and visual word recognition suggests that the processing of vowels and consonants may be different. For instance, adults are more likely to replace vowels than consonants when instructed to change 1 phoneme of a nonword to make it a word (“kebra” becomes “cobra” rather than “zebra”) (Van Ooijen 1996; Cutler et al. 2000). Moreover, participants find it very difficult to use transitional probabilities between successive vowels when presented with an auditory string of artificial speech and later asked to identify possible words, whereas this process is much easier for successive consonants, demonstrating the greater importance of consonants than vowels for word identification (Peña et al. 2002; Bonatti et al. 2005). Vowels and consonants also produce different effects on regional brain activation when participants are engaged in a lexical decision task with pseudowords created by changing 2 consonants or 2 vowels (Carreiras and Price 2008). 
Readers find more difficulty in recognizing a word when 2 consonants as compared with 2 vowels are slightly delayed during the presentation of the word (Lee et al. 2001, 2002; Carreiras et al. 2009). Infants fail to discriminate vowels in a lexical acquisition task, whereas they do discriminate between consonants (Nazzi 2005). Finally, a number of neuropsychological studies have found a double dissociation, so that some patients find vowels more difficult to produce than consonants, whereas others show the reverse pattern (e.g., Caramazza et al. 2000).
One potential target area for identifying differences between consonants and vowels is the letter assignment process in visual word recognition. If differences are found here, the effect will have very important theoretical consequences for the development of computational models of word recognition and letter identification, as well as for the LCD model, which assumes that some of the new coding schemes have neurobiological plausibility but does not make any distinction between consonants and vowels in the letter assignment process. To investigate this possibility, we measured ERPs of the relative position priming effect. Previous behavioral research (Duñabeitia and Carreiras, submitted) has already shown that the relative position priming effect vanishes when the letters manipulated are vowels, whereas it is preserved when the letters are consonants (e.g., csn–CASINO vs. aia–ANIMAL). In addition, these authors showed that the relative position priming effect that holds only for consonants cannot be accounted for just by the frequency of the letters: When they presented primes that contained only vowels, only high-frequency consonants, or only low-frequency consonants, they found a significant relative position priming effect of the same size for the 2 consonant conditions but no effects for vowel primes. Thus, despite the fact that vowels are typically of higher frequency than consonants, Duñabeitia and Carreiras showed that the frequency of the letters by itself is not the factor responsible for the vanishing of the relative position priming effect (note that, otherwise, high-frequency consonants should have produced reduced priming effects as compared with low-frequency consonants). Finally, they also showed that the inclusion of repeated letters in the primes did not have an impact on the interaction between relative position priming and type of letter (consonant, vowel) (e.g., lbl–GLOBAL and cbr–ICEBERG vs. iia–DIGITAL and iea–MINERAL). 
However, to claim that the consonant–vowel difference influences the letter position assignment process, it is critical to show that it affects the very early moments of visual word recognition. So far, previous attempts to show early consonant–vowel effects using transposed-letter stimuli either showed late effects more related to response bias than to perceptual encoding (e.g., Carreiras et al. 2007) or only early effects for vowels (Carreiras et al. 2009). This paper investigates in detail the time course of the processing of vowels and consonants during letter position assignment in visual word recognition, using the relative position priming paradigm combined with the recording of ERPs.
ERPs are functionally decomposable to a greater extent than behavioral data, thus enabling conclusions not only about the existence of processing differences between vowels and consonants but also about the level of processing at which these differences occur. ERPs are voltage changes recorded from the scalp and extracted from the background electroencephalogram by averaging responses time-locked to stimulus onset. Of specific interest for our study are 3 components: N/P150, N250, and N400. The N/P150 component has a posterior scalp distribution focused over right occipital scalp sites and is larger (i.e., more positive) to target words that are repetitions of a prior masked prime word compared with targets that are unrelated to their corresponding masked prime (Holcomb and Grainger 2006). In addition, the N/P150 component has been found with the masked priming paradigm using single letters as primes and targets (Petit et al. 2006). These authors found that the P150 was significantly larger to mismatches between prime and target letter case, but more so when the features of the lower- and uppercase versions of the letters were physically different compared with when they were physically similar (e.g., A–A compared with a–A vs. C–C compared with c–C). Thus, this component seems to be sensitive to processing at the level of visual features. Recent research has shown that this early ERP component can take the form of a bipolar effect across frontal and occipital electrodes (e.g., Chauncey et al. 2008), so that although it is more positive over occipital areas, it is more negative over frontal areas of the scalp. The N250 component has been associated with the degree of prime–target orthographic and phonological overlap in masked priming, suggesting that it is sensitive to processing sublexical representations (Grainger, Kiyonaga, and Holcomb 2006; Holcomb and Grainger 2006; Carreiras et al. 2009; Carreiras, Perea, et al. 
2009), at least when the prime strings do not constitute real words (Duñabeitia et al. 2009). The N250 has a more widespread scalp distribution than the N/P150. Two windows can be isolated in the N250, the first being more sensitive to orthographic processing and the second to phonological processing (see Grainger, Kiyonaga, and Holcomb 2006; Carreiras et al. 2009; Carreiras, Perea, et al. 2009). The N400 component is a negative deflection occurring around 400 ms after word presentation that has been associated with lexical–semantic processing (see Kutas and Federmeier 2000; Holcomb et al. 2002; Muller, Duñabeitia, and Carreiras, unpublished data). Specifically, the amplitude of this negativity is an inverse function of lexical frequency and of lexicality (e.g., Neville et al. 1992; see also Carreiras et al. 2005; Barber and Kutas 2007). In addition, items from small orthographic or syllabic neighborhoods produce an N400 of smaller amplitude than items from large orthographic or syllabic neighborhoods (Holcomb et al. 2002; Barber et al. 2004).
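The averaging logic behind these components can be sketched as follows. This is an illustrative toy, not the analysis pipeline actually used in the study: `erp` and `window_mean` are my own names, and the 256-Hz sampling rate is taken from the recording parameters reported in the Methods below.

```python
FS = 256  # Hz; sampling rate reported in the EEG recording section

# Illustrative only: an ERP is the per-sample average of epochs
# time-locked to stimulus onset; effects are then quantified as mean
# amplitudes in analysis windows (e.g., 175-250 and 350-450 ms).
def erp(epochs):
    """Average a list of equal-length epochs (lists of voltage samples)."""
    n = len(epochs)
    return [sum(trial[i] for trial in epochs) / n for i in range(len(epochs[0]))]

def window_mean(wave, start_ms, end_ms):
    """Mean amplitude between two latencies (ms after stimulus onset)."""
    i0, i1 = int(start_ms / 1000 * FS), int(end_ms / 1000 * FS)
    seg = wave[i0:i1]
    return sum(seg) / len(seg)

# Two fake 1-s trials; their average in the 175-250 ms window is 2.0.
wave = erp([[1.0] * FS, [3.0] * FS])
print(window_mean(wave, 175, 250))   # 2.0
```

Condition effects such as those reported here are differences between such window means across conditions (identity, relative position, unrelated control).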
In particular, we asked: 1) whether consonants and vowels have a differential influence on the letter assignment process in visual word recognition, for which the relative position priming was manipulated for vowels and consonants, and 2) whether the time course for consonants and vowels in the letter assignment process differs, because it has been shown to be different in letter identification (Carreiras et al. 2009). To this end, we sought differences in the relative position priming effects between strings that share only consonants (e.g., “frl” priming “farol,” the Spanish for “bluff” and “lantern”), or that share only vowels (e.g., “aeo” priming “acero,” the Spanish for “steel”). We also included an identity condition (e.g., “farol” priming “farol” and “acero” priming “acero”) and an unrelated condition (e.g., “tsb” and “iui”) as control conditions.
If the contribution of consonants and vowels to the letter assignment process is different and has a different time course, we expect our manipulation to have a differential impact on the ERP components described above. More specifically, for both consonants and vowels, we expect a difference in early perceptual components between the identity and the other 2 conditions, reflected in the N/P150 component, because there is a smaller perceptual change in the identity condition.
Differences between relative position priming for consonants and vowels are expected in the N250 component (i.e., a component that has been posited to be sensitive to orthographic overlap), assuming that there is differential orthographic processing of these 2 types of letters in the initial stages of visual word recognition. This expectation rests on the tight link between consonants, orthographic processing, and lexical selection processes, and on the lesser importance of vowels in orthographic encoding, because the role of vowels is more related to rhythmic and syntactic patterns, as proposed by Nespor et al. (2003) (see also Carreiras and Price 2008). Finally, late differences between the manipulation of consonants and vowels should also be noticeable in the N400 component, according to the lexical constraint hypothesis: Vowel primes and consonant primes may trigger different patterns of activation in the lexicon and, more critically, different numbers of lexical candidates. Consonant primes are much more constraining than vowel primes, because there are far fewer words consistent with the consonant primes than with the vowel primes. Hence, vowel primes activate many lexical units and therefore produce more dispersion of activation and probably more competition during the lexical selection processes.
Twenty-seven undergraduate students (16 women) from the University of La Laguna participated in the experiment in exchange for course credit. All of them were native Spanish speakers, with no history of neurological or psychiatric impairment and with normal or corrected-to-normal vision. All participants were right handed, as assessed with an abridged Spanish version of the Edinburgh Handedness Inventory (Oldfield 1971).
A total of 258 words were selected from the Spanish LEXESP database (Sebastián-Gallés et al. 2000) and analyzed with the B-PAL software (Davis and Perea 2005). All of the words were 5 or 6 letters long (mean: 5.47). Half of these words (namely, 129) included a vowel as first, third, and fifth letters (e.g., acero [steel]). The mean frequency of these words was 14.23 appearances per million (range: 0.18–204.46); the mean length was 5.57 letters; and the mean number of orthographic neighbors was 1.51 (range: 0–8). The other half (the remaining 129 words) was composed of words that included a consonant as first, third, and fifth letters (e.g., farol [lantern]). These words were matched to the previous ones in frequency (mean: 17.97; range 0.18–201.07), length (mean: 5.36), and number of orthographic neighbors (mean: 2.20; range: 0–11) in a pairwise manner using the Match software (van Casteren and Davis 2007). See Table 1 for a summary of the characteristics of the materials. These words could be preceded 1) by the first, third, and fifth letters (relative position priming condition; e.g., aeo–acero and frl–farol), 2) by 3 consonants or vowels that did not appear in the target word (control priming condition; e.g., iui–acero and tsb–farol), and 3) by the same word (identity priming condition, e.g., acero–acero and farol–farol) as an extra control condition. Three lists of materials were created so that each target appeared once in each, but each time in a different priming condition. Different participants were assigned to each of the lists.
Table 1. Characteristics of the materials used in the experiment

| | Frequency, mean (SD) | Frequency, range | Length, mean (SD) | Length, range | Neighbors, mean (SD) | Neighbors, range |
|---|---|---|---|---|---|---|
| Consonants (farol) | 17.97 (28.84) | 0–201 | 5.36 (0.48) | 5–6 | 2.27 (2.15) | 0–8 |
| Vowels (acero) | 14.23 (29.24) | 0–204 | 5.57 (0.50) | 5–6 | 1.51 (1.66) | 0–11 |
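The prime construction described above (first, third, and fifth letters of the target) is simple to express in code. This is a sketch for illustration; the helper names and the accented Spanish vowel set are my own assumptions.

```python
# Assumed vowel inventory for Spanish, including accented forms
# (my assumption, for illustration only).
VOWELS = set("aeiouáéíóúü")

def relative_position_prime(word: str) -> str:
    """First, third, and fifth letters of the target (e.g., acero -> aeo)."""
    return word[0] + word[2] + word[4]

def letter_type(word: str) -> str:
    """Classify a target by whether its 1st/3rd/5th letters are vowels."""
    picked = relative_position_prime(word)
    return "vowel" if all(c in VOWELS for c in picked) else "consonant"

print(relative_position_prime("acero"), letter_type("acero"))  # aeo vowel
print(relative_position_prime("farol"), letter_type("farol"))  # frl consonant
```

Control primes (e.g., iui, tsb) use the same positions but letters absent from the target, and identity primes repeat the target itself.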
In order to make the go/no-go semantic categorization possible, we included a set of 40 animal names in the item set of each list (e.g., erizo [hedgehog], lince [lynx]) of similar length, frequency, and structure to the critical words (targets: mean length 5.60, mean frequency 5.54, and number of orthographic neighbors 2.05; primes: mean length 5.55, mean frequency 9.78, and number of orthographic neighbors 2.07). These words were primed by a new set of 40 unrelated nonanimal prime words. A prime visibility test was also included in order to check for conscious identification of the masked primes. To this end, the 40 animal names were presented as primes and followed by 40 unrelated nonanimal target words. Thus, each list contained 338 trials: 258 experimental trials, 40 trials with animal names for the go/no-go semantic categorization task (12% of the total), and 40 trials for the prime visibility test (12% of the total).
Participants were individually tested in a well-lit soundproof room. The presentation of the stimuli and recording of the responses were carried out using Presentation software on a computer connected to a high-resolution CRT monitor positioned at eye level 80 cm in front of the participant. Each trial consisted of the presentation of a forward mask created by hash mark symbols for 500 ms, followed by the prime for 50 ms, immediately followed by the presentation of the target. Primes and targets were presented in lowercase, following previous work on the relative position priming effect (e.g., Peressotti and Grainger 1999; Grainger, Granier, et al. 2006) in which primes and targets were also presented in the same case. Primes and targets were presented in Courier New font. In order to minimize physical overlap between primes and targets, different font sizes were used for these strings. Each character of the prime strings had a width of 0.12 in, whereas each character of the targets had a width of 0.16 in (note that Courier New is a nonproportional font in which all letters occupy the same amount of space). Under these conditions, no saccades were required during reading of each stimulus, because the strings filled less than 1.5° of the visual field. Target items remained on the screen for 500 ms. The intertrial interval varied randomly between 700 and 900 ms. After this interval, an asterisk was presented for 1000 ms in order to allow participants to blink (see Fig. 1 for a schematic representation of each trial). All items were presented in a different random order for each participant. Participants performed a go/no-go semantic categorization task: They were instructed to press the spacebar on the keyboard only when the letter string displayed referred to an animal name. 
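The visual angle subtended by a string follows directly from the character widths and viewing distance reported above. The following is a back-of-the-envelope sketch; `visual_angle_deg` is my own helper, not part of the experimental software.

```python
import math

# Geometry check for the display parameters described above:
# visual angle of n fixed-width characters at a given viewing distance.
def visual_angle_deg(n_chars: int, char_width_in: float,
                     distance_cm: float = 80.0) -> float:
    width_cm = n_chars * char_width_in * 2.54          # inches -> cm
    return math.degrees(2 * math.atan(width_cm / (2 * distance_cm)))

# A 5-letter target at 0.16 in per character, viewed from 80 cm:
print(round(visual_angle_deg(5, 0.16), 2))   # 1.46
```

The same helper can be applied to the 0.12-in prime characters or to 6-letter strings to compare string extents against the fovea/parafovea.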
Twenty warm-up trials, containing different stimuli from those used in the experimental trials, were provided at the beginning of the session. Participants were asked to avoid eye movements and blinks during the interval when the row of hash marks or the asterisk was not present. Each session lasted approximately 1 h and 15 min.
Electroencephalogram Recording and Analyses
Scalp voltages were collected using a BrainAmp recording system from 32 Ag/AgCl electrodes that were mounted in an elastic cap (ElectroCap International, Eaton, USA, 10-10 system). Figure 2 shows the schematic distribution of the recording sites. Linked earlobes were used as reference. Eye movements and blinks were monitored with 4 further electrodes providing bipolar recordings of the horizontal (Heog−, Heog+) and vertical (Veog−, Veog+) electrooculogram. Interelectrode impedances were kept below 10 kΩ. The EEG was filtered with an analogue band-pass filter of 0.01–100 Hz, and a digital 30-Hz low-pass filter was applied before analysis. The signals were sampled continuously throughout the experiment at a rate of 256 Hz.
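The digital low-pass step can be sketched as follows. Only the 30-Hz cutoff and the 256-Hz sampling rate come from the text; the filter design (a simple first-order IIR low-pass) is my own illustrative choice, not the one actually used for the analyses.

```python
import math

# Illustrative 30-Hz low-pass at 256 Hz sampling (first-order IIR;
# the actual filter design used in the study is not specified here).
FS = 256                       # Hz, sampling rate from the text
FC = 30.0                      # Hz, cutoff from the text
RC = 1 / (2 * math.pi * FC)
ALPHA = (1 / FS) / (RC + 1 / FS)

def lowpass(samples):
    """Single-pole low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out, y = [], 0.0
    for x in samples:
        y += ALPHA * (x - y)
        out.append(y)
    return out

# A 10-Hz component passes largely intact; a 100-Hz component is attenuated.
t = [i / FS for i in range(FS)]
slow = [math.sin(2 * math.pi * 10 * ti) for ti in t]
fast = [math.sin(2 * math.pi * 100 * ti) for ti in t]
```

In practice the smoothed, low-passed signal is what the epoch averaging and window measurements operate on.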