The Assessment of Machine Translation According To Holmes’ Map of Translation Studies | January 2018 | Translation Journal


The Assessment of Machine Translation According To Holmes’ Map of Translation Studies


According to Holmes’ map (1972), translation studies comprises two branches, pure and applied. The pure branch is divided into descriptive and theoretical translation studies. DTS has three orientations. THTS is split into general and partial theories, with six further categories of partial theories. Finally, the applied branch deals with the uses of translation. Machine translation means the use of computers to translate texts. It has a long history, several approaches and architectures, four basic types, some closely related concepts, and some drawbacks.

This paper attempts to assess machine translation (MT) based on Holmes’ map of translation studies. It starts with describing Holmes’ map and a brief taxonomy of MT and its different aspects. There are then ten sections, each examining MT based on one part of Holmes’ map. The last section includes the comparison and investigation of findings in order to make concluding points.

The most important conclusion the researcher draws in this paper is that machine translation can be assessed according to all aspects of Holmes’ map of translation studies. This confirms Holmes’ statement that a single theory can simultaneously fall under several of the map’s restrictions. The researcher extends this claim from the restrictions to the whole map and all its parts and branches.

Keywords: Holmes’ map, translation studies, MT, assessment


1.Introduction

Translation, as an art of transferring a work from one language into another, is as old as written literature. In our modern civilization, the need to translate is ever growing, and its significance in different fields cannot be ignored (Homiedan, 1998). Today, translation is mostly considered a discipline in its own right; its study as an academic subject began relatively recently. This discipline is now generally known as ‘translation studies’, thanks to the Dutch-based US scholar James S. Holmes. In his paper delivered in 1972, but not widely available until 1988, Holmes describes this discipline as being concerned with ‘the complex of problems clustered round the phenomenon of translation’ (Munday, 2008).

A famous classification of translation concepts and theories was given by Holmes in his ‘map of translation studies’. According to his scheme, translation studies is classified into ‘pure’ and ‘applied’ areas. ‘Pure’ translation studies is subdivided into ‘descriptive’ and ‘theoretical’ studies. Descriptive translation studies (DTS) is categorized into three orientations: product-oriented, function-oriented, and process-oriented. Theoretical translation studies (THTS) is either ‘general’ or ‘partial’. Partial theories are restricted by medium, area, rank, text-type, time, or problem. The other branch of translation studies is the ‘applied’ one, referring to the application of translation in other fields and disciplines.

Translation is done either by a human translator or by a computer, and the two may also work together. These notions led to the emergence of the field of ‘machine translation’. Machine translation (MT) refers to computerized systems responsible for the production of TTs with or without human assistance (W. J. Hutchins, 1995). The conception of such a machine implies a thorough investigation of the relationship between language and thought (Delavenay, 1960), and it can be examined from different facets (Khalilizadeh, 2016). MT has a long history and comprises different architectures and approaches, as well as several related concepts such as human involvement, controlled language and sublanguage, pre-editing and post-editing, and its current status and future perspective.

Although the origins of MT are traced back to seventeenth-century ideas of universal and philosophical languages and mechanical dictionaries, it was not until the twentieth century that the first practical suggestions were made (Chéragui, 2012). Warren Weaver, in 1947, proposed the incorporation of the computer into the translation process. This was the initiation of the pioneering period, which lasted until the production of the first operational translation system. In 1954, a collaboration between IBM and a group from Georgetown University produced a demonstration that was a big success, leading to the establishment of more MT research centers. Several approaches were put forward from the 1950s to the mid-1960s. The mid-seventies are regarded as the revival period, witnessing the installation of Systran in the European Community and the success of the Meteo system in Canada. The following period saw busy years of research and development on the indirect approach, i.e. rule-based, syntax-oriented stratification with abstract transfer representation (Homiedan, 1998).

There are different MT architectures, approaches, and types. Regarding architecture, MT is divided into linguistic and computational architectures. The linguistic architecture includes three approaches: direct, transfer, and interlingua. The computational architecture involves rule-based and corpus-based approaches. Knowledge-based MT is a subcategory of rule-based MT, while corpus-based MT is further subdivided into statistical and example-based MT. There are also principles-based, lexicalist, and connectionist approaches. MT for watchers, for revisers, for translators, and for authors are the four basic types of MT systems.

Controlled language, sublanguage, pre-editing, post-editing, speech translation, and online MT are further issues related to the notion of MT. Controlled language is an explicitly defined restriction of a natural language which specifies constraints on lexicon, grammar, and style (Somers, 2003). Sublanguage is a language used to communicate in a specialized technical domain or for a specialized aim (Shuttleworth & Cowie, 2014). Pre-editing is conducted to disambiguate a text, to simplify the flow of thought and to conduct syntactic and semantic analysis so as to arrive at an acceptable raw translation. Post-editing involves such tasks as cleaning up the language, adjusting format and reviewing for technical accuracy, and editing for accuracy especially in fields which require very high quality translation (Homiedan, 1998). Research on spoken translation (within highly restricted domains) has recently developed into a major area of MT activity (J. Hutchins, 2001). Modern machine translation services have made significant strides to allow users to read content in other languages (Denkowski, 2015).

1.1.Statement of the Problem

Machine translation is a challenging issue in the field of translation studies (Khalilizadeh, 2016). It is one of numerous types of translation discussed in the literature of translation studies. It involves a large number of concepts and classifications. It can therefore be investigated based on the map suggested by James S. Holmes for the area of translation studies. This is the main problem discussed in this paper.

1.2.Research Significance

Machine translation is a single type of translation among the large number of translation types mentioned throughout history. There are religious, literary, legal, audiovisual, and many other types of translation as well. A researcher can select any one of these translation types and examine it based on Holmes’ map of translation studies. This paper serves as a helpful guide for such researchers.

Progress is a permanent feature of MT. Machine translation has undergone vast developments in a variety of aspects in recent years. This is due to the increasing knowledge of MT companies and designers. As they become more aware of MT users’ needs and expectations, they will produce MT systems with more capacities and facilities. This paper can help such individuals and companies meet that purpose.

1.3.Research Questions

Q1: How is MT investigated according to descriptive translation studies?

Q2: How is MT investigated according to theoretical translation studies?

Q3: How is MT investigated according to the applied branch of translation studies?

1.4.Research Hypotheses

H1: MT history, approaches, and architectures are product-oriented. The distinctions between generic and special-purpose systems and between MT and HAMT are function-oriented. Such concepts as pre-editing and post-editing and the separation of MT from HAMT are process-oriented.

H2: The distinctions between MAHT and HAMT, between oral and written translation, and MT approaches are medium-restricted. MT history, language number, and language direction are area-restricted. CL and sublanguage are rank-restricted. MT history and types are text-type restricted. MT history is time-restricted. It is possible to consider all facets of MT as problem-restricted.

H3: The human’s role in MT relates to the applied branch of Holmes’ map.

1.5.Research Limitations

MT involves connections between the computer industry and the translation discipline. CAT tools and localization are two further fields involving such connections and are therefore closely related to the notion of MT. The former includes tools such as electronic dictionaries and translation memories, used by human translators to facilitate their performance. The latter refers to the process of adapting and translating a product so that it will be compatible with TL norms. Time and space limitations lead the researcher to restrict the scope of this research to MT and set these two areas aside.

Genre is a very influential factor in translation. There are many genres, such as the novel, poetry, sacred texts, technical materials, legal documents, and so on. They have different dimensions regarding the task of translation, and such differences increase in the case of machine translation: MT’s behavior for one genre may differ from its behavior for another. Once again, time and space limitations restrict the scope of this research to a general view of MT.

2.Literature Review

Translation is a broad notion, understood in many ways. One may talk of translation as a process or product and identify such sub-types as literary translation, technical translation, subtitling and machine translation. While it typically refers to the transfer of written texts, the term sometimes includes interpreting (Shuttleworth & Cowie, 2014). Holmes’ paper ‘The Name and Nature of Translation Studies’ is regarded as the ‘founding statement’ of a new discipline (Munday, 2008). This discipline is known as ‘translation studies’.

2.1.James S. Holmes and the Map of Translation Studies

James Stratton Holmes was a Dutch poet and translator. He was born in the United States, in Collins, Iowa, on May 2, 1924, and died in Amsterdam on November 6, 1986. Holmes moved permanently to the Netherlands in 1950. He sometimes published his work under his real name James S. Holmes, and sometimes under the pen names Jim Holmes and Jacob Lowland. In 1956, Holmes was the first non-Dutch translator to be awarded the prestigious Martinus Nijhoff Award, the most important recognition given to translators of creative texts from or into Dutch (n/a, 2016).

When the Literary Science faculty of the Universiteit van Amsterdam decided in 1964 to create a Department of Translation Studies, Holmes was invited to contribute as an associate professor. He not only had the scholarly background needed, but over time he had also acquired many theoretical notions and considerable practical experience as a translator. He created courses for the Institute of Interpreters and Translators, later integrated into the Institute of Translation Studies of the Universiteit van Amsterdam. Holmes’ work ‘The Name and Nature of Translation Studies’ (1972) is widely recognized as founding Translation Studies as a coordinated research program. His papers on translation made Holmes a key member of Descriptive Translation Studies, and he is still frequently cited in the bibliographies of this field (n/a, 2016).

2.1.1.Pure Branch of Translation Studies (Descriptive Translation Studies)

As a field of pure research, translation studies has two main objectives which are (1) to describe the phenomena of translating and translation as they manifest themselves in the world of our experience, and (2) to establish general principles by means of which these phenomena can be explained and predicted. The two branches of pure translation studies associated with these objectives are designated descriptive translation studies (DTS) or translation description (TD) and theoretical translation studies (THTS) or translation theory (TTH) (Venuti, 2004).

Product-oriented DTS, that area of research which describes existing translations, has traditionally been a significant area of academic research in translation studies. Its starting point is the description of individual translations, or text-focused TD. A second phase is that of comparative TD, in which comparative analyses are made of various translations of the same text, either in a single language or in several languages. Such individual and comparative descriptions provide the materials for surveys of larger corpora of translations. Such descriptive surveys can be larger in scope, diachronic as well as synchronic, and one of the eventual goals of product-oriented DTS is a general history of translation (Venuti, 2004).

Function-oriented DTS is interested in the description of the translation’s function in the recipient socio-cultural situation. It is a study of contexts rather than texts. Greater emphasis on it could result in the development of a field of translation sociology (or, less felicitous but more accurate, since it is a legitimate area of translation studies as well as of sociology, socio-translation studies) (Venuti, 2004).

Process-oriented DTS concerns itself with the process or act of translation. The problem of what exactly happens inside the translator’s mind as he creates a new text in another language has been the subject of much speculation on the part of translation theorists, but there has been very little attempt at systematic investigation of this process under laboratory conditions. Psychologists, however, have developed highly sophisticated methods for analyzing and describing other complex mental processes, and it is hoped that in future this problem will be given closer attention, leading to an area of study that might be called translation psychology or psycho-translation studies (Venuti, 2004).

2.1.2.Pure Branch of Translation Studies (Translation Theories)

The other main branch of pure translation studies, theoretical translation studies or translation theory, is, as its name implies, interested in using the results of descriptive translation studies, in combination with the information available from relevant fields and disciplines, to evolve principles, theories, and models which will serve to explain and predict what translation is and will be (Venuti, 2004).

The ultimate aim of the translation theorist in the broad sense must actually be to develop a full and inclusive theory accommodating so many elements that it can serve to explain and predict all phenomena falling within the domain of translation, to the exclusion of all phenomena falling outside it (Venuti, 2004).

Most of the theories produced so far are indeed little more than prolegomena to a general translation theory. A good share of them are not actually theory at all, in any scholarly sense of the term, but an array of axioms, postulates, and hypotheses so formulated as to be both too inclusive (covering also non-translatory acts and non-translations) and too exclusive (shutting out some translatory acts and some works generally recognized as translations). Others, though they may bear the designation of general translation theories, are indeed not general theories, but partial or specific in their scope (Venuti, 2004).

First, there are medium-restricted translation theories, classified according to the medium used. Medium-restricted theories are subdivided into theories of translation as performed by humans (human translation), as performed by computers (machine translation), and as performed by both in conjunction (mixed translation). Human translation breaks down into oral translation or interpreting (with a further distinction between consecutive and simultaneous) and written translation (Venuti, 2004).

Second, there are area-restricted theories, of two closely related types: restricted as to the languages involved or as to the cultures involved. In both language restriction and culture restriction, the degree of actual limitation may vary. Theories are feasible for translation between Persian and Arabic (language-pair restricted), for translation within Semitic languages (language-group restricted), or from Slavic to Germanic languages (language-group pair restricted). Similarly, theories are developed for translation within Swiss culture (one-culture restricted), translation between Swiss and Belgian cultures (cultural-pair restricted), translation within Eastern Europe (cultural-group restricted), or between languages reflecting a pre-technological culture and the languages of contemporary Western culture (cultural-group pair restricted) (Venuti, 2004).

Third, there are rank-restricted theories, dealing not with discourses or texts as wholes, but with lower linguistic ranks or levels (Venuti, 2004). The term ‘rank’, in this context, refers to linguistic levels such as morphemes, words, phrases, clauses, sentences, and paragraphs. This concept is highly observable in the theory of translation shifts discussed by John Catford in 1965.

“Fourth, there are text-type (or discourse-type) restricted theories, dealing with the problem of translating specific types or genres of lingual messages” (Venuti, 2004, p. 180). The notion of text-type corresponds to the concept of ‘genre’ in literature. The text typologies rendered by Katharina Reiss in 1970s and Peter Newmark in 1980s are instances of such theories.

Fifth, there are time-restricted theories, falling into two types: theories regarding the translation of contemporary texts, and theories regarding the translation of texts from an older period (Venuti, 2004). The English language, for instance, comprises three periods: Old English, Middle English, and Modern English. Each of these periods has its own history of translation, and any one of them can be selected so that the history and literature of translation at that time can be examined.

Finally, there are problem-restricted theories, confining themselves to one or some specific problems within translation studies, problems which can range from such basic questions as the limits of variance and invariance in translation or the nature of translation equivalence, to such more specific issues as the translation of proverbs and idioms (Venuti, 2004).

“It should be noted that theories can frequently be restricted in more than one way” (Venuti, 2004, p. 181). This means that the restrictions mentioned are not mutually exclusive. A number of restrictions can be assigned to a single phenomenon of translation. Translation of Orwell’s novel “Animal Farm” into Persian may be medium-restricted (translation by a human translator or by MT), area-restricted (translation from English into Persian), text-type restricted (translation of allegorical novels), and time-restricted (translation of 1940s works).

2.1.3.Applied Branch of Translation Studies

In Holmes’ scheme, Applied Translation Studies is further divided into four subsections. The first is translator training, probably the main area of concern. The second is the production of translation aids like lexicographical and terminological reference works, and grammars which are tailor-made to suit translators’ requirements. The third area is the establishment of translation policy, where the translation scholar’s task is to render informed advice to others in defining the place and role of translators and translations in society. The last one is translation criticism, the level of which is frequently very low, and in many countries still quite uninfluenced by developments within translation studies (Shuttleworth & Cowie, 2014).

2.2.Machine Translation (MT)

Although they do not represent a new theoretical model, new technologies have transformed translation and are now influencing its research and theorization (Munday, 2008). The demand for translation has recently increased because of growing cross-regional communication and the need for information exchange. A vast amount of material needs to be translated. Some of this work is challenging and difficult, but mostly it is tedious and repetitive, necessitating consistency and accuracy. It is becoming difficult for professional translators to meet translation’s increasing demands. In this situation, machine translation serves as a great help (Chéragui, 2012).

Machine translation (MT) is the use of computer programs to translate texts from one natural language into another automatically. It is usually subsumed under the category of computer-(based) translation, together with computer-aided translation. Computer-aided translation is classified according to several criteria, including the degree of intervention by the human translator, whether the system provides generic or customized translation, and what system architecture or approach is employed (Baker & Saldanha, 2009).

In unassisted or fully automatic MT, the whole text is translated by computers without the intervention of human operators. These systems are sometimes called ‘batch’ systems since the whole text is processed as one task. Assisted MT is split into human assisted MT (HAMT) and machine assisted human translation (MAHT). In HAMT (also known as interactive MT), human translators intervene to resolve problems of ambiguity in the ST or to choose the most appropriate TL word or phrase for output. In MAHT, computer programs are utilized to help human translators carry out the translation. An increasingly popular form of MAHT is computer aided translation (CAT) (Baker & Saldanha, 2009).

MT is also classified according to the domain in which it is designed to translate. Generic MT systems are general purpose systems which translate texts in all subject areas or domains. Customized or special purpose systems are targeted at groups of users who work in specific domains (Baker & Saldanha, 2009). An MT system may be designed to translate in a specific field such as physics, chemistry, mathematics, biology, psychology, politics, and so on.

The number of languages involved and the direction of translation are two further factors by which MT systems are classified. According to W. J. Hutchins (1995), systems are designed either for two languages (bilingual systems) or for more than two languages (multilingual systems). Bilingual systems are designed to operate either in only one direction (unidirectional) or in both directions (bidirectional).

2.2.1.A Brief History

The history of MT is said to date from the period just after World War II, during which computers were used for code-breaking. The idea that translation is, from a computational perspective, in some sense similar to code-breaking is attributed to Warren Weaver. Between 1947 and 1949, Weaver contacted a number of colleagues in the USA and abroad, trying to raise interest in using the new digital computers for translation. He made a link between translation and cryptography, though from the early days most researchers found translation a difficult problem (Somers, 2003).

The first demonstration system was developed through a collaboration between IBM and Georgetown University in 1954 (J. Hutchins, 2001). Over the next 10 to 15 years, MT research groups started work in several countries, including the USSR, Great Britain, Canada, and elsewhere. In 1964, the US government decided to see whether its money had been well spent and set up the Automatic Language Processing Advisory Committee (ALPAC). Its report, published in 1966, was highly negative about MT, with very damaging results: it concluded that there was no immediate or predictable prospect of useful MT (Somers, 2003).

The 1970s and early 1980s saw MT research occurring largely outside the USA and USSR, namely in Canada, Western Europe and Japan. Canada’s bilingual policy resulted in the establishment of a significant research group at the University of Montreal. Several groups in France, Germany and Italy worked on MT, and the decision of the Commission of the European Communities in Luxembourg to experiment with the Systran system was very important. In Japan, some success with getting computers to handle the complex writing system of Japanese encouraged university and industrial research groups to examine Japanese-English translation (Somers, 2003).

By the mid-1980s, it was generally understood that fully automatic high-quality translation of unrestricted texts (FAHQT) was not a goal readily achievable in the near future. Scientists in MT started to explore ways in which usable and useful MT systems could be developed even if they fell short of this goal. Coming into the 1990s and the present day, MT and CAT products are marketed and used both by language professionals and by amateurs, the latter for translating e-mails and World Wide Web pages (Somers, 2003).

2.2.2.MT Approaches

The term ‘approach’ in machine translation refers to the criteria upon which an MT system is designed. There are two basic MT approaches: rule-based MT (RBMT) and corpus-based MT (CBMT). RBMT is divided into three main subcategories, namely transfer-based, interlingua-based, and dictionary-based MT. CBMT is likewise dichotomized into statistical machine translation (SMT) and example-based machine translation (EBMT). A further approach is hybrid MT, which combines elements of the approaches mentioned above.

Rule-based MT (RBMT) is essentially based upon various types of linguistic rules (Baker & Saldanha, 2009). This approach necessitates the analysis and representation of the meaning of the ST and the generation of an equivalent TT. Representations should be lexically and structurally unambiguous (Homiedan, 1998). The direct and indirect approaches are the two major paths taken in the development of RBMT systems (Baker & Saldanha, 2009).

Systems developed before the 1980s largely adopted the direct approach. Such systems work between language pairs on the basis of bilingual dictionary entries and morphological analysis. They translate the original work word by word, without much detailed analysis of the syntactic structures of the input text or of the correlation of meaning between words, and then make some rudimentary adjustments to the TT in accordance with the morphological and syntactic rules of the TL (Baker & Saldanha, 2009).

During the 1980s, the indirect approach became the dominant framework in MT design. Translation engines using this approach analyze the syntactic structure of a text, usually creating an intermediary and abstract representation of ST’s meaning, and generating from it the TT (Baker & Saldanha, 2009).

Transfer-based MT consists of three basic stages: parsing an input sentence into a formal meaning representation that still retains the deep structure characteristics of the ST; ‘transferring’, converting the ST formal representation into one that carries the deep structure characteristics of the TL; and generating a target sentence from the transferred meaning representation (Baker & Saldanha, 2009).
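The three stages above can be made concrete with a minimal, illustrative Python sketch. This is not a real MT system: the mini-lexicons, tag set, and English-French example are all invented for demonstration, and real systems use full parsers and rich transfer rules.

```python
# Illustrative sketch of the three transfer-based stages.
# The lexicons and example sentence are invented for demonstration.

SL_LEXICON = {"the": "DET", "cat": "NOUN", "sleeps": "VERB"}
BILINGUAL = {"the": "le", "cat": "chat", "sleeps": "dort"}

def parse(sentence):
    """Analysis: build a crude SL-dependent representation (tagged tokens)."""
    return [(word, SL_LEXICON.get(word, "UNK")) for word in sentence.split()]

def transfer(sl_repr):
    """Transfer: map SL lexical items to TL items, keeping the structure."""
    return [(BILINGUAL.get(word, word), tag) for word, tag in sl_repr]

def generate(tl_repr):
    """Generation: linearize the TL representation into a target sentence."""
    return " ".join(word for word, _tag in tl_repr)

print(generate(transfer(parse("the cat sleeps"))))  # -> "le chat dort"
```

Each stage corresponds to one function, so the whole pipeline is the composition generate(transfer(parse(...))), which mirrors the analysis-transfer-generation division described above.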

In interlingua MT, an abstract representation of the original’s meaning is created using an interlingua or pivot language, i.e. an (ideally) SL/TL independent representation, from which TTs in several different languages can potentially be produced. Translation thus consists of two basic stages: an analyzer transforms the ST into the interlingua and a generator transforms the interlingua representation into the TL (Baker & Saldanha, 2009).

A variant of interlingual MT is knowledge-based MT (KBMT), which produces semantically accurate translations but mostly requires, for the purpose of disambiguation, massive acquisition of various types of knowledge, including non-linguistic knowledge related to the domains of the texts to be translated and general knowledge about the real world (Baker & Saldanha, 2009).

The third subcategory of the rule-based MT approach is the method known as dictionary-based MT. Machine translation may use a method based on dictionary entries, i.e. the words are translated as they stand in a dictionary (n/a, 2008). This approach corresponds to the ‘word-for-word’ translation method and fails to render an accurate target text, as translation is more than the substitution of words with their corresponding dictionary-based equivalents in another language.
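A tiny sketch makes the limitation concrete. Assuming an invented English-German mini-dictionary covering only this one example, word-for-word substitution preserves the source word order, which is often wrong in the target language:

```python
# Word-for-word (dictionary-based) MT sketch; the dictionary entries are
# invented for illustration and cover only this one example.
DICTIONARY = {
    "i": "ich", "have": "habe", "seen": "gesehen",
    "the": "das", "house": "Haus",
}

def dictionary_translate(sentence):
    # Replace each word by its single dictionary equivalent, keeping SL order.
    return " ".join(DICTIONARY.get(word, word) for word in sentence.lower().split())

print(dictionary_translate("I have seen the house"))
# -> "ich habe gesehen das Haus"
# Correct German would be "ich habe das Haus gesehen": word-for-word
# substitution cannot perform the required reordering.
```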

“With more and more text being available in electronic form, it is becoming relatively easy to obtain digital texts together with their translations” (Erjavec, 2003, p. 93). A corpus is a large body of texts. Corpora exist as written texts or recorded speech, but increasingly the word corpus is used to refer to the machine-readable variety (McEnery & Wilson, 1993). Corpora are either bilingual (parallel corpora) or multilingual. CBMT is the approach developed on the basis of using corpora in machine translation.

CBMT automatically acquires the translation knowledge or models from bilingual corpora (Chéragui, 2012). CBMT is split into two categories: statistical MT and example-based MT. In statistical machine translation (SMT), words and phrases in a bilingual parallel corpus are aligned as the basis for a translation model of word–word and phrase–phrase frequencies. Translation includes the selection, for each input word, of the most probable words in the TL, and the determination of the most probable sequence of the selected words on the basis of a monolingual language model (Baker & Saldanha, 2009).
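The combination of word selection and sequence modeling can be shown with a toy decoder. In the sketch below, all probabilities and vocabulary are invented, the source word order is kept (real SMT also models reordering), and the search is brute force rather than an efficient beam search:

```python
import itertools

# Toy statistical MT decoder; probabilities and vocabulary are invented.

# Translation model: P(target word | source word).
T = {
    "house": {"maison": 0.8, "domicile": 0.2},
    "blue":  {"bleu": 0.7, "bleue": 0.3},
}
# Bigram language model over target words: P(w2 | w1); "<s>" marks the start.
LM = {
    ("<s>", "maison"): 0.5, ("maison", "bleue"): 0.4, ("maison", "bleu"): 0.05,
    ("<s>", "domicile"): 0.1, ("domicile", "bleue"): 0.1, ("domicile", "bleu"): 0.1,
}

def decode(source_words):
    """Return the target sequence maximizing translation prob. x LM prob."""
    candidates = [list(T[w]) for w in source_words]
    best_seq, best_score = None, 0.0
    for seq in itertools.product(*candidates):
        score, prev = 1.0, "<s>"
        for src, tgt in zip(source_words, seq):
            score *= T[src][tgt] * LM.get((prev, tgt), 1e-6)
            prev = tgt
        if score > best_score:
            best_seq, best_score = list(seq), score
    return best_seq

print(decode(["house", "blue"]))  # -> ['maison', 'bleue']
```

Even in this toy, the monolingual language model overrides the locally most probable word: ‘bleue’ beats ‘bleu’ because the bigram model rewards agreement with the feminine ‘maison’.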

Example-based MT (EBMT) systems use bilingual parallel corpora as their main knowledge base. In this approach, translation is produced by comparing the input with a corpus of typical translated examples, extracting the closest matches and using them as a model for the TT. Translation is therefore done in three stages: matching, which involves finding matches for the input in the parallel corpus; alignment, which involves identifying which parts of the corresponding translation are to be reused; and recombination, which involves putting together those parts of the examples to be used in a legitimate way (Baker & Saldanha, 2009).
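The three stages can be illustrated with a deliberately tiny sketch. The example pairs and word alignments below are invented, and the alignment step is reduced to a hand-made word table; real EBMT systems induce alignments from parallel corpora:

```python
# Toy example-based MT: matching, alignment, recombination.
EXAMPLES = [
    ("the red car", "la voiture rouge"),
    ("the red house", "la maison rouge"),
]
# Hand-made word alignments for the substitutable slots (assumed known here).
WORD_ALIGN = {"car": "voiture", "house": "maison", "motorbike": "moto"}

def word_overlap(a, b):
    return len(set(a.split()) & set(b.split()))

def translate(source):
    # 1. Matching: find the example whose source side shares the most words.
    ex_src, ex_tgt = max(EXAMPLES, key=lambda ex: word_overlap(source, ex[0]))
    out = ex_tgt
    # 2. Alignment: locate the source words that differ from the example.
    for src_word, ex_word in zip(source.split(), ex_src.split()):
        if src_word != ex_word and src_word in WORD_ALIGN and ex_word in WORD_ALIGN:
            # 3. Recombination: splice the new word's translation into the target.
            out = out.replace(WORD_ALIGN[ex_word], WORD_ALIGN[src_word])
    return out

print(translate("the red motorbike"))  # -> "la moto rouge"
```

The input reuses the stored example "the red car" as a template and swaps in the aligned translation of the one word that differs, which is exactly the matching-alignment-recombination cycle described above.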

Some recent work has focused on hybrid approaches, which combine the transfer approach with one of the corpus-based approaches. The hybrid approach was designed to work with smaller amounts of resources and to depend on the learning and training of transfer rules. The main idea in this approach is to automatically learn syntactic transfer rules from restricted amounts of word-aligned data. This data contains all the necessary information for parsing, transfer, and generation of the sentences (Chéragui, 2012).

Lexicalist MT, connectionist MT, and principles-based MT are three further MT approaches. The essence of the lexicalist approach in MT is to reduce transfer rules to simple bilingual lexical equivalencies. The need for structural representations (common to the transfer and interlingua approaches) is abandoned in favor of sets of semantic constraints on lexical items. Translation involves identifying the TL lexical items which satisfy the semantic constraints attached to the SL lexical equivalents. The bag of receptor lexical items is then shaken to generate an output text conforming to the syntax and semantics of the receptor language (Homiedan, 1998).

Connectionism, based on parallel computation, is a significant development in the computational modeling of cognition. A distinctive feature is the computation of the strengths of links between nodes of networks and the adjustment of the weightings as a consequence of analyses, i.e. the network learns about the links and their strengths for later use. Another attractive feature is the possibility of computing alternative analyses in parallel (Homiedan, 1998).
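The adjustment of link weights as a consequence of successive analyses can be sketched with the smallest possible network: a single node learning the logical AND function. This is a generic perceptron illustration, not taken from the cited sources; the learning rate, epoch count and training data are arbitrary.

```python
# A single node learns the strengths of its two input links by
# adjusting the weights after each analysis (perceptron rule).
def train(samples, epochs=20, rate=0.1):
    w, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out
            # Weight adjustment as a consequence of this analysis.
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            bias += rate * err
    return w, bias

# Truth table of logical AND as training data.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
       for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

The network stores what it has learned only in the link strengths `w` and `b`, which is the sense in which it "learns about the links and their strengths for later use".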

The third approach is principles-based MT, covering MT research within the principles and parameters framework, manifested in syntax in Government-Binding Theory. The basic premise is that there exist universal principles which hold across all languages. Differences between languages are accounted for by different settings of the parameters, both syntactic and lexical-semantic (Homiedan, 1998).


There are two basic types of architecture for machine translation: linguistic and computational. The linguistic architecture comprises three main approaches used for developing MT systems, which differ in their complexity and sophistication: the direct, transfer-based, and interlingua approaches. Computational architecture, on the other hand, involves RBMT and CBMT with their corresponding subcategories (Chéragui, 2012).

2.2.4.MT Types

“In more than five years of research and development, machine translation has evolved into a truly practical system which is now beginning to build its foundation in the world market” (Homiedan, 1998, p. 9). The following are four types of Machine Translation mentioned by the same author (1998):

  1. MT for watchers (MT-W): This is intended for readers who seek access to some information written in a foreign language, and who are ready to accept a possibly ‘rough’ translation rather than nothing. This was the type envisaged by the pioneers. MT-W emerged with the need to translate military technological documents. It was largely dictionary-based translation, far removed from linguistics-based MT.
  2. MT for revisers (MT-R): This type aims to produce raw translation automatically with a quality comparable to that of the first drafts produced by humans. The output then needs only brushing up, so that the professional translator, freed from that boring and time-consuming task, can be promoted to a reviser.
  3. MT for translators (MT-T): This aims at helping human translators do their job by providing online dictionaries, thesauri, and translation memories. It is usually incorporated into translation workstations and PC-based translation tools.
  4. MT for authors (MT-A): This aims at authors who decide to have their texts translated into foreign languages and who accept writing under the control of the system, or helping the system disambiguate utterances, so that a satisfactory translation can be obtained without any revision.

2.2.5.Controlled Language and Sublanguage

Controlled language (CL) is a specially simplified version of a human language (Shuttleworth & Cowie, 2014). Through the use of CL, texts become easier to read and understand. This enhances the efficiency and accuracy of the tasks associated with technical documentation, and improves the quality of translations. CL is put to three uses: as a guideline for authoring, with self-imposed conformance on the part of the writer; with software that performs a complete check of each new text to verify conformance; and as a component of a system for automatic MT of technical text. A typical goal of CL is to reduce the number of words used and to adhere to the principle of one-to-one correspondence between word forms and concepts. A further basic goal is to reduce or eliminate the use of ambiguous and complex sentence structures (Somers, 2003).

CLs are either human-oriented or machine-oriented. Human-oriented CLs seek to improve text comprehension by humans. Machine-oriented CLs try to improve text comprehension by computers. Instances of restrictions on writing which aid both humans and computers are the limitation of sentence length and the obligatory use of commas between conjoined sentences. A general difference is that writing rules for machine-oriented CLs must be precise and computationally tractable, e.g. ‘Do not use sentences longer than 20 words’. Writing rules efficient for human-oriented CLs may be computationally intractable or intentionally vague, e.g. ‘Make your instructions as specific as possible’, or ‘Present new and complex information slowly and carefully’ (Somers, 2003).
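A computationally tractable rule of the machine-oriented kind, such as the sentence-length limit above, can be checked in a few lines of code. The naive sentence splitter and the limit of 20 words are simplifications for illustration; production CL checkers use proper sentence segmentation.

```python
import re

def check_sentence_length(text, limit=20):
    """Return (sentence, word_count) for every sentence over the limit."""
    # Naive split on sentence-final punctuation; a real checker would
    # handle abbreviations, numbers, etc.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [(s, len(s.split())) for s in sentences if len(s.split()) > limit]

short = "Open the panel. Remove the filter."
long_text = "This sentence " + "really " * 20 + "exceeds the limit."
print(check_sentence_length(short))      # [] -- conforms to the CL
print(check_sentence_length(long_text))  # flags the 25-word sentence
```

A rule like "Make your instructions as specific as possible", by contrast, cannot be reduced to such a check, which is exactly the tractability difference the paragraph above describes.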

The term sublanguage dates back to Zellig Harris, the structuralist linguist, who gave a precise characterization of the idea in terms of his linguistic theory. A sublanguage arises when a community of users (domain specialists) communicate among themselves. They develop their own vocabulary, i.e. not only specialist terms having no meaning to outsiders, but also everyday words are given narrower interpretations, corresponding to the concepts which characterize and define the domain. In addition, there will be a favored style of writing or speaking, with preferred grammatical usages (Somers, 2003).

A sublanguage approach to MT benefits from two main characteristics of sublanguage as compared to the whole language, namely the reduced necessity of coverage in the lexicon and grammar. Unlike CLs, where the restrictions may be motivated by the MT system’s limitations, with sublanguages, the restrictions occur naturally. One of the advantages claimed for the sublanguage approach is that cross-lingually there are similarities which may be exploited (Somers, 2003).

2.2.6.Pre-Editing and Post-Editing

Pre-editing and post-editing are two other concepts discussed in the field of machine translation. The modern concepts of pre- and post- editing stem from the fact that the linguistic or contextual meaning of a text is beyond the machine’s recognition. Pre-editing is done by examining different grammatical constructions, specifying parts of speech, giving the range of a given phrase, and inserting a mark to break up sentences. Post-editing removes actual errors made by the machine and renders a grammatically acceptable text (Homiedan, 1998).

Pre-editing refers to the process of preparing an ST for translation by an MT system. Most such systems find it almost impossible to analyze language that is even slightly convoluted. Hence it is frequently necessary to simplify, clarify and disambiguate the grammar and vocabulary of the text to be translated, in order to ensure that the output is of a reasonable quality. Such pre-editing may take the form of rewriting the text in a CL, or simply shortening sentences, reducing the number of subordinate clauses and adding explicating words like conjunctions. Pre-editing may be semi-automated, as some systems have a critiquing facility indicating the points where the input needs to be rewritten (Shuttleworth & Cowie, 2014).

The task of the post-editor is basically to edit, modify and correct pre-translated text which has been processed by an MT system from an SL into a TL. One of the primary reasons to consider the use of MT, inciting the need for subsequent treatment of MT output texts by way of post-editing, comes from the constantly increasing focus on globalization. Another factor is the change in expectations regarding the type and quality of translated material. Post-editors constantly struggle with the issue of the quantity of elements to change, while keeping the TT at a sufficient level of quality (Somers, 2003).

2.2.7.Some Other Significant Points

According to Hutchins (2001), until recently, spoken language was outside the range of MT. Spoken language translation (SLT) combines two computational tasks: speech understanding and translation. The first task involves extracting from an acoustic signal the relevant bits of sound which can be interpreted as speech (i.e. ignoring background noise as well as vocalizations which are not speech as such), correctly identifying the individual speech sounds (phonemes) and the words they comprise, and then filtering out distractions like hesitations, repetitions, false starts, incomplete sentences, etc., to give a coherent text message. All this must then be translated, a task quite different from translating written texts, since it is often the content rather than the form which is paramount (Somers, 2003).

With the rapid development of the Internet, MT vendors are collaborating with Internet service/content providers to offer on-demand online translation services, with human post-editing as a further option. Today, many Internet portals offer free online MT services. The demand for online translation has given a great impetus to the development of MT systems. The need to translate Internet content has prompted many stand-alone PC-based MT software developers to incorporate into their products the function of translating webpages and email messages. Moreover, by providing a large number of customers with easy access to multiple translation engines on a free or trial basis, MT developers are able to engage an unprecedented number of people in the testing of MT systems, which will certainly help improve the systems’ quality over time and promote the need for research and development in the field (Baker & Saldanha, 2009). Millions of people a day accept the [Translate this page] option in web browsers and enjoy its immediate benefit (Wilks, 2009).


Broadly speaking, translation requires at least two kinds of knowledge: linguistic and extra-linguistic. Following this categorization, problems in MT are classified into linguistic and extra-linguistic ones. The treatment of extra-linguistic problems is more difficult because such knowledge is harder to codify. Linguistic problems in MT are primarily caused by the inherent ambiguities of natural languages and by the lexical or structural mismatches between different languages. There are two kinds of ambiguity: lexical and structural. Lexical ambiguity is mostly caused by polysemy and homonymy. Structural or grammatical ambiguity arises where different constituent structures (underlying structures) are assigned to one construction (surface structure). Common cases are alternative structures and uncertain anaphoric reference. Alternative structures are constructions presenting two or more possible interpretations, only one of which is intended. Uncertain anaphoric reference occurs when an expression can refer back to more than one antecedent (Baker & Saldanha, 2009).

SL and TL mismatches (known as cases of non-correspondence or transfer problems) arise from lexical and structural differences between languages. Lexical mismatches are due to differences in how languages classify the world. Structural mismatches take place when different languages use different structures for the same purpose or the same structure for different purposes (Baker & Saldanha, 2009).

One of MT’s major pitfalls is its inability to translate non-standard language as accurately as standard language. Named entities, a further pitfall for MT, are concrete or abstract entities in the real world such as people, organizations, companies and places; the term also covers expressions of time, space, quantity, and so on. The initial difficulty in dealing with named entities is simply their identification in the text. If named entities are not recognized by the MT system, they may be erroneously translated as common nouns, which affects the text’s readability. It is also possible that, if not identified, named entities will be omitted from the TT, which has implications for the text’s readability and message (n/a, 2008).
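One common workaround for the identification problem is to mask known entities before translation, so that the engine cannot mistranslate them as common nouns, and restore them afterwards. The gazetteer, placeholder format and example sentence below are invented for illustration; real systems use trained named-entity recognizers rather than fixed lists.

```python
# Hypothetical gazetteer of known named entities.
gazetteer = ["Mark Smith", "Reading"]  # "Reading" the town, not the gerund

def mask_entities(text):
    """Replace gazetteer entities with placeholders the MT engine leaves alone."""
    masked, entities = text, []
    for name in gazetteer:
        if name in masked:
            entities.append(name)
            masked = masked.replace(name, f"__NE{len(entities) - 1}__")
    return masked, entities

def unmask(text, entities):
    """Restore the original entities in the (translated) output."""
    for i, name in enumerate(entities):
        text = text.replace(f"__NE{i}__", name)
    return text

masked, ents = mask_entities("Mark Smith lives in Reading.")
print(masked)                # __NE0__ lives in __NE1__.
print(unmask(masked, ents))  # Mark Smith lives in Reading.
```

Without such protection, a system could plausibly translate "Reading" as the gerund of "to read", precisely the common-noun error described above.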


This paper investigates machine translation according to the map suggested by James S. Holmes in 1972 for translation studies. This is done in four steps. The first two steps render some details concerning Holmes’ map and machine translation respectively. MT and its various dimensions are then evaluated in ten sections, each involving one part of the map. Finally, the points found in these ten sections are compared, and conclusions regarding the subject of the research are drawn.

4.Data Analysis

The domain of translation studies is vast enough to allow any concept or phenomenon more or less related to the notion of translation to be covered by it. This is true for MT as well. MT is directly related to the notion of translation and includes a large number of its aspects. It has been discussed by many translation scholars. It is therefore concluded that MT, considered as a theory of translation, can be studied and evaluated according to the map rendered by James S. Holmes (1972) for translation studies.

4.1.Pure Branch of Translation Studies (Product-Oriented DTS)

MT’s product is a text which is expected to have a quality as high as a text translated by a human translator. Such an expectation is more logical when a human translator is involved in the process of pre-editing or post-editing or both. Seeking an accurate translation, the client may compare different translations of the same source text by different systems using different approaches and select the best one. For example, he may compare translations of text A by systems M and N (the former based on RBMT and the latter on CBMT), and select the text which fits his expectations and requirements.

MT’s products can be used by linguists for linguistic studies. Several texts containing various linguistic elements (homonyms and polysemous words, ambiguous words and structures, alternative structures, uncertain anaphoric references, concrete/abstract or countable/uncountable nouns, different verb tenses, etc.) can be given to a system based on a specific architecture and its outputs compared. This is an instance of a synchronic study. MT’s products can also be used in diachronic studies: the translated texts produced by different systems designed throughout history can be compared to see which design was better than the others.

4.2.Pure Branch of Translation Studies (Function-Oriented DTS)

The function intended for the translated text in the target sociocultural context plays a crucial role in choosing an MT system to render the target text. If the text is to be used by a member of the general public, a generic MT system is used. If the text’s subject belongs to a particular field of study and is intended for a particular profession, a special-purpose system is selected. If the text is to be used for its content, human-assisted MT is preferable. If it is intended for use in an investigation, e.g. of MT’s linguistic strengths and weaknesses, no human assistance should be involved.

4.3.Pure Branch of Translation Studies (Process-Oriented DTS)

This area deals with the procedures followed inside the translator’s mind while converting a source text into a receptor text. If scientific advancements allow scientists to become aware of such procedures and to design a scheme imitating them, a great step forward will be made in MT. Once such a scheme is designed, similar schemes could be designed for computers so that the system’s CPU follows the same procedures. If this hypothesis comes true, the boundaries between MT and HAMT will diminish and may even disappear.

It would also be very helpful to model the process followed inside the human translator’s mind while pre- or post-editing. If this comes true, the quality of MT output will improve: the system could edit the text it has itself translated, like a human translator, and it could also edit the text before starting to translate it. Both result in a higher-quality output.

4.4.Pure Branch of Translation Studies (Medium-Restricted THTS)

Human and computer are two distinct translation mediums, conjoined in the forms of MAHT and HAMT. MT is used in both oral and written translation. Written translation seems less difficult for MT systems, since oral translation (particularly in public settings like conferences and offices) involves several non-linguistic elements such as gesture and body language, facial expressions, eye contact, clothing, etc.

It was pointed out that MT systems follow either RBMT or CBMT and their corresponding categories in their activities. These approaches indicate the mediums through which an application accesses the database required for a translation task. The mediums in RBMT are linguistic rules and properties such as rules of linguistic schools, dictionary-based databases, and transfer rules. The mediums in CBMT include bilingual or multilingual corpora based on either statistical rules or sets of examples. The medium in hybrid approach is a combination of different mediums from a variety of approaches.

4.5.Pure Branch of Translation Studies (Area-Restricted THTS)

MT systems are either bilingual or multilingual. Bilingual MT systems are language-pair restricted. Multilingual MT systems may or may not be language-group restricted: if the languages supported by an MT system belong to the same language family, the system is language-group restricted. It is unlikely that an MT system can be evaluated on the basis of culture restrictions, as the computer and its applications are associated with language (a concrete reality) rather than with culture (an abstract reality beyond physical manifestation in words and sentences).

Based on the language direction and the number of languages involved, it is possible to investigate MT’s history. Most of the earlier systems were language-pair restricted and unidirectional; multilingual systems appeared only later. After decades of research and development, it is now possible to find multilingual and bidirectional systems. The assessment of MT based on culture restrictions remains a matter for future MT research.

4.6.Pure Branch of Translation Studies (Rank-Restricted THTS)

The category of rank-restricted theories is closely related to the notions of CL and sublanguage. A CL makes a variety of changes in different linguistic ranks of a language. These changes include various lexical and grammatical modifications. Sublanguage is also related to the concept of rank. The distinction between a sublanguage and the common language may be based on differences in the use of various linguistic ranks. A language may express a subject in a single long sentence, whereas its sublanguage prefers to use several short sentences for stating the same point.

4.7.Pure Branch of Translation Studies (Text-Type Restricted THTS)

A glance at the history of MT reveals that the need to translate a particular text-type has frequently been the main motivation for researchers to develop useful MT engines. Its starting point was the translation of military documents. In other periods, other genres, such as technical, scientific and religious texts, were the subject of MT research.

The four basic types of machine translation can be attributed to different types of texts. MT-W is used for texts whose contents are so important that they should be translated immediately, even at the expense of low quality; military and political documents and texts related to society’s health and security are instances of such text-types. MT-R is used for any text-type given to professional translators, reducing time, cost, and energy. MT-T contributes to the improvement of MT itself by providing databases in the form of computerized texts. The fourth type, MT-A, is helpful to the translation industry regardless of genre.

4.8.Pure Branch of Translation Studies (Time-Restricted THTS)

Many dimensions of MT history can be examined according to the time-restricted part of Holmes’ map. A large number of developments in the history of MT, including rule-based and corpus-based MT, controlled language and sublanguage, pre-editing and post-editing, the different MT types, SLT, online MT services, and many other dimensions, can be studied with regard to their histories and the different times at which developments were made.

4.9.Pure Branch of Translation Studies (Problem-Restricted THTS)

Machine translation, like many other concepts, suffers from several problems. They are either linguistic or extra-linguistic. In order to assess MT based on problem-restricted theories, one of several problems of MT is selected and its influence on MT and its output is investigated. For instance, the researcher can select such a subject for his research as the effects of computers’ inability to translate non-standard language on the quality of MT output.

This restriction is also used in a comparative way. The researcher may choose two aspects of MT to compare their problems against each other and determine the better one. The researcher can compare problems of different MT systems developed throughout MT history, of RBMT and CBMT and their corresponding subcategories, of linguistic and computational architectures, of four basic MT types, of CL and sublanguage, of pre-editing and post-editing, and so on, against each other.

4.10.Applied Branch of Translation Studies

Machine translation is a very helpful device in translator training. During his teaching, the teacher has to show various instances of translated texts, a task that is faster and easier via MT. MT can be used by teachers to show the differences between human and computer skills and capacities. An MT system usually stores a large number of dictionaries and/or translated texts. These help teachers and students look up words in dictionaries and distinguish grammatical structures from ill-formed ones.

Machine translation is a great help for translators. It makes all types of lexicographical and terminological reference works available to them. Using translation aids via MT has many advantages: it is faster and easier to look for the information required; it provides up-to-date information; and it permits simultaneous search of several dictionaries. Computerized translation aids also occupy less physical space and are easier to carry and store.

It was pointed out that one of the translator’s tasks is the establishment of translation policy. This task involves MT as well. A translator should determine the crucial role played by MT in society. He should point out that MT facilitates translation, accelerates communication, and stimulates research and development in such industries as computing, artificial intelligence, and the Internet. Translation policy involves the attempt to prove that machine translation is not a rival or enemy to human translators.

Finally, the applied branch of Holmes’ map involves the area of translation criticism. This mostly involves attention to the failures and mistakes occurring in a translated text and making the translator aware of them. Such a capacity can be provided for MT systems as well. It would be a very helpful ability for an MT system to receive a text and its translation, evaluate the translation, compare it against the source text, identify the translation errors defined by the discipline of translation or by the criteria under which the system works, and show them to the translator together with suggested correct translations and appropriate ways to avoid their repetition. This would improve the status of translation and contribute to training more competent professional translators.


The map rendered by Holmes in 1972 has been an extremely influential scheme in the history of translation studies. It is so comprehensive that it covers any phenomenon related, to some degree, to the task of translation. Any translation theory given by any translation scholar, from Cicero and Saint Jerome in ancient times to such modern figures as Baker, Nord and Even-Zohar, can be assessed by his map. The same is true for translation-related practices like interpreting, localization, machine translation, and so forth.

Machine translation (MT), as defined by its name, is the process of using a computer device to transfer a text in the source language into another text in the receptor language. It is a connection between two disciplines: translation and computer. This means that MT is interdisciplinary by nature. Both computer and translation are themselves interdisciplinary. Computer deals with mathematics, language, electricity, etc. Translation is associated with linguistics, history, literature, etc. Thus MT covers a large number of disciplines and fields of studies.

The distinction between MT, HAMT and MAHT is function-oriented, process-oriented, and medium-restricted. The distinction between generic and special-purpose MT is function-oriented. The language number involved in MT is area-restricted. MT’s history is product-oriented, area-restricted, text-type restricted, and time-restricted. Its approaches are product-oriented and medium-restricted. Its architectures are product-oriented. Its types are text-type restricted. CL and sublanguage are rank-restricted. Pre-editing and post-editing are product-oriented and process-oriented. SLT is medium-restricted. Problem-restriction is assigned to all aspects of MT. Finally, the distinction between MT, MAHT and HAMT, and the notion of MT’s problems are attributed to applied branch of Holmes’ map.

The last concluding point indicates that MT can be assessed according to all categories of Holmes’ map. This verifies Holmes’ statement that a single theory can be restricted in more than one way. MT’s various dimensions, from history to its drawbacks, are assessed according to all categories of the map. It is examined on the basis of both pure and applied branches.


Baker, M., & Saldanha, G. (Eds.). (2009). Routledge Encyclopedia of Translation Studies. New York: Routledge.

Chéragui, M. A. (2012). Theoretical overview of machine translation. Paper presented at the Proceedings ICWIT.

Delavenay, É. (1960). An Introduction to Machine Translation. London: Thames and Hudson.

Denkowski, M. (2015). Machine Translation for Human Translators. (Ph.D Doctoral Dissertation), Carnegie Mellon University.  

Erjavec, T. (2003). Compilation and exploitation of parallel corpora. CIT, Journal of Computing and Information Technology, 11(2), 93-102.

Homiedan, A. H. (1998). Machine translation. Journal of King Saud University, Language & Translation, 10, 1-21.

Hutchins, J. (2001). Machine translation and human translation: in competition or in complementation? International Journal of Translation, 13(1-2), 5-20.

Hutchins, W. J. (1995). Machine translation: A brief history. In E. F. K. Koerner & R. E. Asher (Eds.), Concise history of the language sciences: From the Sumerians to the cognitivists (pp. 431-445). Oxford: Pergamon Press.

Khalilizadeh, M. (2016). A comparative study on translation of homonyms done by translation machines (A Case study of Google Translate and I’m translator websites). Paper presented at the National Conference on Translation and interdisciplinary Studies, Birjand University, Birjand, Iran.

McEnery, T., & Wilson, A. (1993). Corpora and Translation: Uses and Future Prospects: UCREL.

Munday, J. (2008). Introducing Translation Studies: Theories and Applications (2 ed.). USA and Canada: Routledge.

n/a. (2008). Machine translation Retrieved from

n/a. (2016). James S. Holmes Retrieved from

Shuttleworth, M., & Cowie, M. (2014). Dictionary of Translation Studies. New York: Routledge.

Somers, H. (Ed.). (2003). Computers and Translation: A Translator’s Guide (Vol. 35). Amsterdam and Philadelphia: John Benjamins Publishing Company.

Venuti, L. (Ed.) (2004). The Translation Studies Reader. USA and Canada: Routledge.

Wilks, Y. (2009). Machine Translation: Its Scope and Limits: Springer.
