Some Corpora Available Online
Posted on October 5, 2020 by Ardian Setiawan

CHAINS: Characterising Individual Speakers
CHAINS is a research project funded by Science Foundation Ireland from April 2005 to March 2009. Its goal is to advance the science of speaker identification by investigating those characteristics of a person's speech that make it unique.

The Blog Authorship Corpus
The Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from blogger.com in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words, or approximately 35 posts and 7,250 words per person.

Old Bailey Corpus
The proceedings of the Old Bailey, London's central criminal court, were published from 1674 to 1913 and constitute a large body of texts from the beginning of Late Modern English. The Proceedings contain over 200,000 trials, totalling ca. 134 million words, and their verbatim passages are arguably as near as we can get to the spoken word of the period. The material thus offers the rare opportunity of analyzing everyday language in a period that has been neglected with regard both to the compilation of primary linguistic data and to the description of the structure, variability, and change of English.

CoRD | Corpus of Early English Medical Writing (CEEM)
The Corpus of Early English Medical Writing is a corpus of English vernacular medical writing. Consisting of three diachronically divided subcorpora, it covers the entire history of medical writing in English, from the earliest manuscripts to the beginning of modern clinical medicine.

The York-Toronto-Helsinki Parsed Corpus of Old English Prose (YCOE)
The York-Toronto-Helsinki Parsed Corpus of Old English Prose (YCOE) is a 1.5-million-word syntactically annotated corpus of Old English prose texts. As a sister corpus to the Penn-Helsinki Parsed Corpus of Middle English (PPCME2), it uses the same form of annotation and is accessed by the same search engine, CorpusSearch. The YCOE was created with a grant from the English Arts and Humanities Research Board (B/RG/AN5907/APN9528). The corpus itself (the annotated text files) is distributed by the Oxford Text Archive and can be obtained from them free of charge for non-commercial use.

ARCHER Corpus (The University of Manchester)
ARCHER is a multi-genre corpus of British and American English covering the period 1650-1990, first constructed by Douglas Biber and Edward Finegan in the 1990s. It is managed as an ongoing project by a consortium of participants at fourteen universities in seven countries.

Santa Barbara Corpus of Spoken American English
The Santa Barbara Corpus of Spoken American English is based on a large body of recordings of naturally occurring spoken interaction from all over the United States. It represents a wide variety of people of different regional origins, ages, occupations, genders, and ethnic and social backgrounds. The predominant form of language use represented is face-to-face conversation, but the corpus also documents many other ways that people use language in their everyday lives: telephone conversations, card games, food preparation, on-the-job talk, classroom lectures, sermons, story-telling, town hall meetings, tour-guide spiels, and more.

The Newcastle Electronic Corpus of Tyneside English (NECTE)
The Newcastle Electronic Corpus of Tyneside English (NECTE) is a corpus of dialect speech from Tyneside in North-East England. It is based on two pre-existing corpora, one collected in the late 1960s by the Tyneside Linguistic Survey (TLS) project and the other in 1994 by the Phonological Variation and Change in Contemporary Spoken English (PVC) project. NECTE amalgamates the TLS and PVC materials into a single Text Encoding Initiative (TEI)-conformant, XML-encoded corpus and makes them available in a variety of aligned formats: digitized audio, standard orthographic transcription, phonetic transcription, and part-of-speech tagging. The NECTE website describes the corpus in detail and makes it available to academic researchers, educationalists, the media in non-commercial applications, and organisations such as language societies and individuals with a serious interest in historical dialect materials.

The Limerick Corpus of Irish English
The Limerick Corpus of Irish English (L-CIE) has been developed by the University of Limerick in conjunction with Mary Immaculate College, Limerick. This one-million-word spoken corpus of Irish English discourse consists of naturally occurring conversations recorded in a wide variety of mostly informal everyday settings throughout Ireland. There are currently 375 transcripts (totalling over 1,000,000 words) available at the site.

American National Corpus
The American National Corpus (ANC) project is creating a massive electronic collection of American English, including texts of all genres and transcripts of spoken data produced from 1990 onward. The ANC aims to provide the most comprehensive picture of American English ever created and to serve as a resource for education, linguistic and lexicographic research, and technology development.

Great Britain (ICE-GB) @ ICE-corpora.net
The British component of ICE is based at the Survey of English Usage, University College London. The British ICE corpus (ICE-GB) was released in 1998 and is now available. The corpus is POS-tagged and parsed.

WebCorp: The Web as Corpus
WebCorp LSE is a fully tailored linguistic search engine that caches and processes large sections of the web.

WaCKy
"We are a community of linguists and information technology specialists who got together to develop a set of tools (and interfaces to existing tools) that allow linguists to crawl a section of the web, process the data, and index and search them. We try to keep everything very laid-back and flexible (minimal constraints on data representation, programming language, etc.) to make it easier for people with different backgrounds and goals to use our resources and/or contribute to the project. We built a few corpora you can download, and in the near future we'll have a web interface for direct online use of the corpora." A minimal sketch of this crawl-and-count approach appears below.
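Here is a minimal Python sketch of the "web as corpus" idea behind WebCorp and WaCKy: fetch a page, strip the markup, tokenise, and count word frequencies. The URL is a placeholder, and the HTML stripping is deliberately crude; real pipelines of this kind add boilerplate removal, language identification, and deduplication on top of it.

import re
import urllib.request
from collections import Counter

def fetch_tokens(url: str) -> list[str]:
    """Download a page and return a list of lower-cased word tokens."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    # Drop scripts and stylesheets, then strip the remaining tags.
    text = re.sub(r"<script.*?</script>|<style.*?</style>", " ", html,
                  flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())

if __name__ == "__main__":
    tokens = fetch_tokens("https://example.com/")  # placeholder URL
    print(Counter(tokens).most_common(10))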
VISL – Corpus Eye
VISL's grammatical and NLP research are both largely corpus-based. On the one hand, VISL develops taggers, parsers, and computational lexica based on corpus data; on the other, these tools, once functional, are used for the grammatical annotation of large running-text corpora, often with or for external partners (project list 1999-2009). The main methodological approach for automatic corpus annotation is Constraint Grammar (CG), a word-based annotation method.

TIME Magazine Corpus of American English
This website allows you to quickly and easily search more than 100 million words of text of American English from 1923 to the present, as found in TIME magazine. You can see how words, phrases, and grammatical constructions have increased or decreased in frequency, and how words have changed meaning over time.

Regex Dictionary by Lou Hevly
The Regex Dictionary is a searchable online dictionary, based on The American Heritage Dictionary of the English Language, 4th edition, that returns matches based on strings (defined here as series of characters and metacharacters) rather than on whole words, while optionally grouping results by their part of speech.

Michigan Corpus of Academic Spoken English >> ELI Corpora & UM ACL
The Michigan Corpus of Academic Spoken English (MICASE) is a collection of nearly 1.8 million words of transcribed speech (almost 200 hours of recordings) from the University of Michigan (U-M) in Ann Arbor, created by researchers and students at the U-M English Language Institute (ELI). MICASE contains data from a wide range of speech events (including lectures, classroom discussions, lab sections, seminars, and advising sessions) and locations across the university.

LDC – Linguistic Data Consortium
New corpora at the LDC include:
- Indian Language Part-of-Speech Tagset: Bengali ~ 100K words of manually annotated Bengali text
- Message Understanding Conference 7 Timed (MUC7_T) ~ timed annotation for named entities
- Asian Elephant Vocalizations ~ 57.5 hours of audio recordings of vocalizations by Asian Elephants
- NIST 2005 Open Machine Translation (OpenMT) Evaluation ~ source data, reference translations, and scoring software used in the NIST 2005 OpenMT evaluation
- TRECVID 2006 Keyframes & Transcripts ~ keyframes extracted from English, Chinese, and Arabic broadcast programming

Linas' collection of NLP data
Here is a collection of linguistic data, including parsed texts from Voice of America, Project Gutenberg, the Simple English Wikipedia, and a portion of the full English Wikipedia. This data is the result of many CPU-years' worth of number-crunching and is meant to provide pre-digested input for higher-order linguistic processing. Two types of data are provided: parsed and tagged texts, and large SQL tables of statistical correlations. The texts were dependency-parsed with a combination of RelEx and Link Grammar, and are marked with dependencies (subject, object, prepositional relations, etc.), features (part-of-speech tags, verb-tense and noun-number tags, etc.), Link Grammar linkage relations, and phrasal constituency structure.

LexChecker
LexChecker is a web-based corpus query tool that shows how English words are used. Users submit a word into the query box (like a Google search), and LexChecker returns a list of the patterns in which the word is typically used. Each pattern listed for a word is linked to sentences from the British National Corpus (BNC) that show the word occurring in that pattern. The patterns are what we have dubbed 'hybrid n-grams', a uniquely useful form of corpus search result. They can consist of a string of words, such as keep a close eye on or gain the upper hand, or they can contain substitutable slots marked by specific parts of speech, for example run the risk of [v-ing], stand [noun] in good stead, or [verb] a storm of protest (as in raise/spark/cause/create/unleash a storm of protest).
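To make the hybrid n-gram idea concrete, here is a toy Python sketch. The tagged sentence and the Penn Treebank-style tag names are hand-coded assumptions, and LexChecker's actual matching machinery is not described here, so this only illustrates the concept of mixing literal words with part-of-speech slots.

TaggedSentence = list[tuple[str, str]]  # (word, POS tag) pairs

def matches(pattern: list[str], sent: TaggedSentence) -> bool:
    """True if the pattern occurs anywhere in the tagged sentence.
    Elements in [brackets] match POS tags; all others match words."""
    n = len(pattern)
    for i in range(len(sent) - n + 1):
        window = sent[i:i + n]
        if all(tag == elem.strip("[]") if elem.startswith("[")
               else word.lower() == elem.lower()
               for elem, (word, tag) in zip(pattern, window)):
            return True
    return False

# "They run the risk of losing" matches "run the risk of [v-ing]".
sentence = [("They", "PRP"), ("run", "VBP"), ("the", "DT"),
            ("risk", "NN"), ("of", "IN"), ("losing", "VBG")]
print(matches(["run", "the", "risk", "of", "[VBG]"], sentence))  # True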
Forensic Linguistics Institute (FLI) Corpus of Texts
Text categories include: Appeals, Blackmail and Extortion, Confessions, Death Row Final Statements, Declarations of War, Last Wills and Testaments, Miscellaneous, Statements by Police, and Suicide Notes.

David Lee's Corpus-based Linguistics LINKS
"These annotated links (c. 1,000 of them) are meant mainly for linguists and language teachers who work with corpora, not computational linguists/NLP (natural language processing) people, so although the language-engineering-type links here are fairly extensive, they are not exhaustive…"

CORPORA List
The CORPORA list is open for information and questions about text corpora, such as availability, aspects of compiling and using corpora, software, tagging, parsing, bibliography, conferences, etc. The list is also open for all types of discussion with a bearing on corpora.

Corpora for Language Learning and Teaching

[bnc] British National Corpus
The British National Corpus (BNC) is a 100-million-word collection of samples of written and spoken language from a wide range of sources, designed to represent a wide cross-section of current British English, both spoken and written.

AUE: The alt.usage.english Home Page
This is the web site of the alt.usage.english newsgroup. It contains an audio archive (the We Say Tomato project).

Corpus of Historical American English (COHA)
COHA allows you to quickly and easily search more than 400 million words of text of American English from 1810 to 2009 (see details on corpus composition). You can see how words, phrases, and grammatical constructions have increased or decreased in frequency, how words have changed meaning over time, and how stylistic changes have taken place in the language. A short sketch of this kind of diachronic frequency query appears at the end of this list.

Speech Accent Archive
The Speech Accent Archive uniformly presents a large set of speech samples from a variety of language backgrounds. Native and non-native speakers of English read the same paragraph, and their readings are carefully transcribed. The archive is used by people who wish to compare and analyze the accents of different English speakers.

IDEA – The International Dialects Of English Archive
IDEA was created in 1997 as a free, online archive of primary-source dialect and accent recordings for the performing arts. Its founder and director is Paul Meier, author of the best-selling Accents and Dialects for Stage and Screen, a leading dialect coach for theatre and film, and a specialist in accent reduction.

American Rhetoric: The Power of Oratory in the United States
A database of and index to 5,000+ full-text, audio, and video versions of public speeches, sermons, legal proceedings, lectures, debates, interviews, other recorded media events, and a declaration or two.

The AMI Meeting Corpus
The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings.
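As promised under the COHA entry, here is a minimal Python sketch of the kind of diachronic query that COHA and the TIME corpus support: the relative frequency of a word per decade. The (year, text) documents are invented stand-ins; the real corpora are queried through their own web interfaces rather than as raw text files.

import re
from collections import Counter

documents = [  # (year, text) pairs; hypothetical examples
    (1923, "the wireless set crackled all evening"),
    (1965, "she watched the television set in silence"),
    (2006, "he checked the web on his phone"),
]

def freq_by_decade(word: str, docs) -> dict[int, float]:
    """Occurrences of `word` per 1,000 tokens, grouped by decade."""
    hits, totals = Counter(), Counter()
    for year, text in docs:
        decade = (year // 10) * 10
        tokens = re.findall(r"[a-z]+", text.lower())
        totals[decade] += len(tokens)
        hits[decade] += tokens.count(word.lower())
    return {d: 1000 * hits[d] / totals[d] for d in sorted(totals)}

print(freq_by_decade("set", documents))  # "set" fades after the 1960s here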
Melawan keburukan (Fighting evil)
Posted on February 28, 2017 (updated August 9, 2018)

Do not fight evil with evil. Fight evil with goodness.

"He who fights with monsters should look to it that he himself does not become a monster." – Friedrich Nietzsche
Bibliographic/Literature Mapping Tools
Posted on March 4, 2023

Mapping the literature is an important part of research. Here is a list of technological tools/apps for literature mapping >> Bibliographic/Literature Mapping Tools
Reflective thinking and critical thinking: what is the difference?
Posted on July 12, 2021

Reflective thinking means thinking about what we have learned (or read). In reflective thinking, we try to identify the key points of the information we have obtained. The main aim of reflective thinking is to understand the topic we are studying. Critical thinking means evaluating what we have learned (or read). To be able to do…