In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence").[2] Roger Schank and Marvin Minsky—two leading AI researchers who had experienced the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. They described a chain reaction, similar to a "nuclear winter", that would begin with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] Three years later the billion-dollar AI industry began to collapse.
There were two major winters, approximately 1974–1980 and 1987–2000,[3] along with several smaller episodes.
Enthusiasm and optimism about AI have generally increased since the field's low point in the early 1990s. Beginning about 2012, interest in artificial intelligence (and especially the sub-field of machine learning) from the research and corporate communities led to a dramatic increase in funding and investment, culminating in the current (as of 2024) AI boom.
Natural language processing (NLP) research has its roots in the early 1930s and began with work on machine translation (MT).[4] However, significant advances and applications began to emerge after Warren Weaver circulated his influential 1949 memorandum on translation, later collected in Machine Translation of Languages: Fourteen Essays.[5] The memorandum generated great excitement within the research community. In the following years, notable events unfolded: IBM embarked on the development of the first machine translation system, MIT appointed its first full-time professor of machine translation, and several conferences dedicated to MT took place. The culmination came with the public demonstration of the IBM-Georgetown machine in 1954, which garnered widespread attention in respected newspapers.[6]
As in later AI booms that ended in AI winters, the media tended to exaggerate the significance of these developments. Headlines about the IBM-Georgetown experiment proclaimed phrases like "The bilingual machine," "Robot brain translates Russian into King's English,"[7] and "Polyglot brainchild."[8] However, the actual demonstration involved the translation of a curated set of only 49 Russian sentences into English, with the machine's vocabulary limited to just 250 words.[6] To put things into perspective, a 2006 study by Paul Nation found that humans need a vocabulary of around 8,000 to 9,000 word families to reach the roughly 98% text coverage required to comprehend written texts.[9]
During the Cold War, the US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports. The government aggressively supported efforts at machine translation starting in 1954. Another factor that propelled the field of mechanical translation was the interest shown by the Central Intelligence Agency (CIA). During that period, the CIA firmly believed in the importance of developing machine translation capabilities and supported such initiatives. They also recognized that this program had implications that extended beyond the interests of the CIA and the intelligence community.[6]
At the outset, the researchers were optimistic. Noam Chomsky's new work in grammar was streamlining the translation process and there were "many predictions of imminent 'breakthroughs'".[10]
However, researchers had underestimated the profound difficulty of word-sense disambiguation. In order to translate a sentence, a machine needed to have some idea what the sentence was about, otherwise it made mistakes. An apocryphal[11] example is "the spirit is willing but the flesh is weak." Translated back and forth with Russian, it became "the vodka is good but the meat is rotten."[12] Later researchers would call this the commonsense knowledge problem.
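To make the disambiguation difficulty concrete, here is a minimal, purely hypothetical sketch (not a reconstruction of any 1950s system) of the word-for-word substitution strategy early MT programs leaned on: each word maps to several candidate senses, and with no model of what the sentence is about, the program can only guess, for example by always taking the first sense listed.

```python
# Hypothetical illustration of naive word-for-word translation.
# Each source word has several candidate senses; with no notion of
# context, the program simply takes the first one listed, which is
# how "spirit" can come out as the drink rather than the soul.
bilingual_dictionary = {
    "spirit": ["spirit (distilled liquor)", "spirit (soul, will)"],
    "willing": ["willing (ready, eager)"],
    "flesh": ["flesh (meat)", "flesh (the body)"],
    "weak": ["weak (feeble)", "weak (diluted)"],
}

def naive_translate(sentence: str) -> list[str]:
    """Translate word by word, always choosing the first listed sense."""
    return [bilingual_dictionary.get(w, [w])[0] for w in sentence.lower().split()]

print(naive_translate("spirit willing flesh weak"))
# -> ['spirit (distilled liquor)', 'willing (ready, eager)',
#     'flesh (meat)', 'weak (feeble)']
# Choosing the liquor sense of "spirit" and the meat sense of "flesh"
# is exactly the failure mode behind the apocryphal
# "the vodka is good but the meat is rotten" translation.
```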
By 1964, the National Research Council had become concerned about the lack of progress and formed the Automatic Language Processing Advisory Committee (ALPAC) to look into the problem. They concluded, in a famous 1966 report, that machine translation was more expensive, less accurate and slower than human translation. After spending some 20 million dollars, the NRC ended all support. Careers were destroyed and research ended.[2][10]
Machine translation followed the same path as the rest of NLP, from rule-based approaches through statistical approaches to neural network approaches, which by 2023 had culminated in large language models.
The failure of single-layer neural networks in 1969
Simple networks or circuits of connected units, including Walter Pitts and Warren McCulloch's neural network for logic and Marvin Minsky's SNARC system, failed to deliver the promised results and were abandoned in the late 1950s. Following the success of programs such as the Logic Theorist and the General Problem Solver,[13] algorithms for manipulating symbols seemed more promising as a means of achieving logical reasoning, which was then viewed as the essence of intelligence, whether natural or artificial.
Interest in perceptrons, invented by Frank Rosenblatt, was kept alive only by the sheer force of his personality.[14]
He optimistically predicted that the perceptron "may eventually be able to learn, make decisions, and translate languages".[15]
Mainstream research into perceptrons ended partially because the 1969 book Perceptrons by Marvin Minsky and Seymour Papert emphasized the limits of what perceptrons could do.[16] While it was already known that multilayer perceptrons were not subject to this criticism, nobody in the 1960s knew how to train one; backpropagation was still years away.[17]
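The canonical example of the limitation Minsky and Papert emphasized is the XOR function: its positive and negative cases are not linearly separable, so no single-layer perceptron can represent it. The sketch below is a modern illustration of this point (not period code); it applies the classic perceptron learning rule to both AND, which is linearly separable, and XOR, which is not.

```python
import itertools

def train_perceptron(samples, epochs=100, lr=0.1):
    """Classic single-layer perceptron learning rule (Rosenblatt-style)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - prediction
            if error != 0:
                errors += 1
                w[0] += lr * error * x1
                w[1] += lr * error * x2
                b += lr * error
        if errors == 0:  # converged: every sample classified correctly
            return w, b, True
    return w, b, False

inputs = list(itertools.product([0, 1], repeat=2))
and_data = [(x, int(all(x))) for x in inputs]        # linearly separable
xor_data = [(x, int(x[0] != x[1])) for x in inputs]  # not linearly separable

print("AND converges:", train_perceptron(and_data)[2])  # True, after a few epochs
print("XOR converges:", train_perceptron(xor_data)[2])  # False, no matter how long it trains
```

The XOR run never reaches zero errors however long it trains, whereas a network with a hidden layer can represent XOR; but, as noted above, no practical way to train such a multilayer network was known until backpropagation.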
Major funding for projects using neural network approaches was difficult to find in the 1970s and early 1980s.[18] Important theoretical work continued despite the lack of funding. The "winter" of the neural network approach came to an end in the mid-1980s, when the work of John Hopfield, David Rumelhart and others revived large-scale interest.[19] Rosenblatt did not live to see this, however, as he died in a boating accident shortly after Perceptrons was published.[15]
In 1973, Professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the utter failure of AI to achieve its "grandiose objectives". He concluded that nothing being done in AI could not be done in other sciences. He specifically mentioned the problem of "combinatorial explosion" or "intractability", which implied that many of AI's most successful algorithms would grind to a halt on real-world problems and were only suitable for solving "toy" versions.[20]
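Lighthill's "combinatorial explosion" point is easy to make quantitative: an exhaustive search with branching factor b explored to depth d visits on the order of b^d states, so modest growth in problem size overwhelms any plausible hardware. The back-of-the-envelope figures below are purely illustrative and do not come from the report.

```python
# Illustrative arithmetic behind "combinatorial explosion":
# exhaustive search visits roughly b**d states for branching factor b
# and search depth d. Even a generous budget of a billion states per
# second is exhausted after only a few extra levels of depth.
STATES_PER_SECOND = 1e9  # assumed machine speed, purely illustrative

for branching_factor in (10, 30):          # e.g. a small puzzle vs. a chess-like game
    for depth in (5, 10, 15):
        states = branching_factor ** depth
        seconds = states / STATES_PER_SECOND
        print(f"b={branching_factor:2d} d={depth:2d} "
              f"states={states:.1e}  time={seconds:.1e} s")

# b=30, d=15 already requires ~1.4e22 states -- about 450,000 years at a
# billion states per second -- which is why methods that worked on "toy"
# problems scaled so badly to real-world ones.
```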
The report was contested in a debate broadcast in the BBC "Controversy" series in 1973. The debate "The general purpose robot is a mirage" from the Royal Institution was Lighthill versus the team of Donald Michie, John McCarthy and Richard Gregory.[21] McCarthy later wrote that "the combinatorial explosion problem has been recognized in AI from the beginning".[22]
The report led to the almost complete dismantling of AI research in the UK.[20] AI research continued in only a few universities (Edinburgh, Essex and Sussex). Research would not revive on a large scale until 1983, when Alvey (a research project of the British Government) began to fund AI again from a war chest of £350 million in response to the Japanese Fifth Generation Project (see below). Alvey had a number of UK-only requirements which did not sit well internationally, especially with US partners, and lost Phase 2 funding.
DARPA's early 1970s funding cuts
During the 1960s, the Defense Advanced Research Projects Agency (then known as "ARPA", now known as "DARPA") provided millions of dollars for AI research with few strings attached. J. C. R. Licklider, the founding director of DARPA's computing division, believed in "funding people, not projects"[23] and he and several successors allowed AI's leaders (such as Marvin Minsky, John McCarthy, Herbert A. Simon or Allen Newell) to spend it almost any way they liked.
This attitude changed after the passage of the Mansfield Amendment in 1969, which required DARPA to fund "mission-oriented direct research, rather than basic undirected research".[24] Pure undirected research of the kind that had gone on in the 1960s would no longer be funded by DARPA. Researchers now had to show that their work would soon produce some useful military technology. AI research proposals were held to a very high standard. The situation was not helped when the Lighthill report and DARPA's own study (the American Study Group) suggested that most AI research was unlikely to produce anything truly useful in the foreseeable future. DARPA's money was directed at specific projects with identifiable goals, such as autonomous tanks and battle management systems. By 1974, funding for AI projects was hard to find.[24]
AI researcher Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more."[25] The result, Moravec claims, is that some of the staff at DARPA had lost patience with AI research. "It was literally phrased at DARPA that 'some of these people were going to be taught a lesson [by] having their two-million-dollar-a-year contracts cut to almost nothing!'" Moravec told Daniel Crevier.[26]
While the autonomous tank project was a failure, the battle management system (the Dynamic Analysis and Replanning Tool) proved to be enormously successful, saving billions in the first Gulf War, repaying all of DARPA's investment in AI[27] and justifying DARPA's pragmatic policy.[28]
In 1971, the Defense Advanced Research Projects Agency (DARPA) began an ambitious five-year experiment in speech understanding. The goals of the project were to provide recognition of utterances from a limited vocabulary in near-real time. Three organizations finally demonstrated systems at the conclusion of the project in 1976: Carnegie-Mellon University (CMU), which actually demonstrated two systems (HEARSAY-II and HARPY); Bolt, Beranek and Newman (BBN); and System Development Corporation with Stanford Research Institute (SDC/SRI).
The system that came closest to satisfying the original project goals was the CMU HARPY system. The relatively high performance of the HARPY system was largely achieved through 'hard-wiring' information about possible utterances into the system's knowledge base. Although HARPY made some interesting contributions, its dependence on extensive pre-knowledge limited the applicability of the approach to other signal-understanding tasks.
DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at Carnegie Mellon University. DARPA had hoped for, and felt it had been promised, a system that could respond to voice commands from a pilot. The SUR team had developed a system which could recognize spoken English, but only if the words were spoken in a particular order. DARPA felt it had been duped and, in 1974, they cancelled a three million dollar a year contract.[30]
Many years later, several successful commercial speech recognition systems would use the technology developed by the Carnegie Mellon team (such as hidden Markov models) and the market for speech recognition systems would reach $4 billion by 2001.[31]
For a description of Hearsay-II, see The Hearsay-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty and A Retrospective View of the Hearsay-II Architecture, both of which appear in Blackboard Systems.[32]
Reddy gives a review of progress in speech understanding at the end of the DARPA project in a 1976 article in Proceedings of the IEEE.[33]
Contrary view
Thomas Haigh argues that activity in the domain of AI did not slow down, even as funding from the DoD was being redirected, mostly in the wake of congressional legislation meant to separate military and academic activities.[34] Indeed, professional interest was growing throughout the 1970s. Using the membership count of ACM's SIGART, the Special Interest Group on Artificial Intelligence, as a proxy for interest in the subject, he writes:[34]
(...) I located two data sources, neither of which supports the idea of a broadly based AI winter during the 1970s. One is membership of ACM's SIGART, the major venue for sharing news and research abstracts during the 1970s. When the Lighthill report was published in 1973 the fast-growing group had 1,241 members, approximately twice the level in 1969. The next five years are conventionally thought of as the darkest part of the first AI winter. Was the AI community shrinking? No! By mid-1978 SIGART membership had almost tripled, to 3,500. Not only was the group growing faster than ever, it was increasing proportionally faster than ACM as a whole which had begun to plateau (expanding by less than 50% over the entire period from 1969 to 1978). One in every 11 ACM members was in SIGART.
The setbacks of the late 1980s and early 1990s
The collapse of the LISP machine market
In the 1980s, a form of AI program called an "expert system" was adopted by corporations around the world. The first commercial expert system was XCON, developed at Carnegie Mellon for Digital Equipment Corporation, and it was an enormous success: it was estimated to have saved the company 40 million dollars over just six years of operation. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including software companies like Teknowledge and Intellicorp (KEE), and hardware companies like Symbolics and LISP Machines Inc., which built specialized computers, called LISP machines, that were optimized to process the programming language LISP, the preferred language for AI research in the USA.[35][36]
In 1987, three years after Minsky and Schank's prediction, the market for specialized LISP-based AI hardware collapsed. Workstations by companies like Sun Microsystems offered a powerful alternative to LISP machines, and companies like Lucid offered a LISP environment for this new class of workstations. The performance of these general workstations became an increasingly difficult challenge for LISP machines. Companies like Lucid and Franz offered increasingly powerful versions of LISP that were portable to all UNIX systems. For example, benchmarks were published showing workstations maintaining a performance advantage over LISP machines.[37] Later desktop computers built by Apple and IBM would also offer a simpler and more popular architecture on which to run LISP applications. By 1987, some of them had become as powerful as the more expensive LISP machines. The desktop computers had rule-based engines such as CLIPS available.[38] These alternatives left consumers with no reason to buy an expensive machine specialized for running LISP. An entire industry worth half a billion dollars was replaced in a single year.[39]
By the early 1990s, most commercial LISP companies had failed, including Symbolics, LISP Machines Inc. and Lucid Inc. Other companies, like Texas Instruments and Xerox, abandoned the field. A small number of customer companies (that is, companies using systems written in LISP and developed on LISP machine platforms) continued to maintain their systems; in some cases, this meant taking on the resulting support work themselves.[40]
Slowdown in deployment of expert systems
By the early 1990s, the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier in research on nonmonotonic logic. Expert systems proved useful, but only in a few special contexts.[41][42] Another problem was the computational hardness of truth maintenance for general knowledge. KEE used an assumption-based approach supporting multiple-world scenarios that was difficult to understand and apply.
The few remaining expert system shell companies were eventually forced to downsize and search for new markets and software paradigms, like case-based reasoning or universal database access. The maturation of Common Lisp saved many systems, such as ICAD, which found application in knowledge-based engineering. Other systems, such as Intellicorp's KEE, moved from LISP to a C++ variant on the PC and helped establish object-oriented technology (including providing major support for the development of UML; see UML Partners).
In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth Generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. By 1991, the impressive list of goals penned in 1981 had not been met. According to HP Newquist in The Brain Makers, "On June 1, 1992, The Fifth Generation Project ended not with a successful roar, but with a whimper."[40] As with other AI projects, expectations had run much higher than what was actually possible.[43][44]
In 1983, in response to the fifth generation project, DARPA again began to fund AI research through the Strategic Computing Initiative. As originally proposed, the project would begin with practical, achievable goals, which even included artificial general intelligence as a long-term objective. The program was under the direction of the Information Processing Technology Office (IPTO) and was also directed at supercomputing and microelectronics. By 1985 it had spent $100 million and 92 projects were underway at 60 institutions, half in industry, half in universities and government labs. AI research was well funded by the SCI.[45]
Jack Schwartz, who ascended to the leadership of IPTO in 1987, dismissed expert systems as "clever programming" and cut funding to AI "deeply and brutally", "eviscerating" SCI. Schwartz felt that DARPA should focus its funding only on those technologies which showed the most promise; in his words, DARPA should "surf", rather than "dog paddle", and he felt strongly that AI was not "the next wave". Insiders in the program cited problems in communication, organization and integration. A few projects survived the funding cuts, including the pilot's assistant and an autonomous land vehicle (which were never delivered) and the DART battle management system, which (as noted above) was successful.[46]
AI winter of the 1990s and early 2000s
A survey of reports from the early 2000s suggests that AI's reputation was still poor:
Alex Castro, quoted in The Economist, 7 June 2007: "[Investors] were put off by the term 'voice recognition' which, like 'artificial intelligence', is associated with systems that have all too often failed to live up to their promises."[47]
Patty Tascarella in Pittsburgh Business Times, 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."[48]
John Markoff in the New York Times, 2005: "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."[49]
In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems,[51][52] but the field was rarely credited for these successes. In 2006, Nick Bostrom explained that "a lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[53] Rodney Brooks stated around the same time that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."[54]
AI has reached the highest levels of interest and funding in its history in the early 2020s by every possible measure, including:
publications,[55]
patent applications,[56]
total investment ($50 billion in 2022),[57] and
job openings (800,000 U.S. job openings in 2022).[58]
The successes of the current "AI spring" or "AI boom" include advances in language translation (in particular, Google Translate), image recognition (spurred by the ImageNet training database and commercialized by Google Image Search), and game-playing systems such as AlphaZero (chess), AlphaGo (go), and Watson (Jeopardy!). A turning point came in 2012, when AlexNet (a deep learning network) won the ImageNet Large Scale Visual Recognition Challenge with half as many errors as the second-place entry.[59]
The 2022 release of OpenAI's chatbot ChatGPT, which as of January 2023 had over 100 million users,[60] reinvigorated the discussion about artificial intelligence and its effects on the world.[61][62]
Different sources use different dates for the AI winter. Consider: (1) Howe 1994: "Lighthill's [1973] report provoked a massive loss of confidence in AI by the academic establishment in the UK (and to a lesser extent in the US). It persisted for a decade ― the so-called 'AI Winter'"; (2) Russell & Norvig 2003, p. 24: "Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. Soon after that came a period called the 'AI Winter'".
Warren Weaver (1949). "Translation". In William N. Locke; A. Donald Booth (eds.). Machine Translation of Languages: Fourteen Essays (PDF). Cambridge, MA; New York: Technology Press of the Massachusetts Institute of Technology; John Wiley & Sons. pp. 15–23. ISBN 9780262120029.
Engelmore, Robert; Morgan, Tony (1988). Blackboard Systems. Addison-Wesley. p. 25. ISBN 0-201-17431-6.
Crevier 1993, pp. 115–116 (on whom this account is based). Other views include McCorduck 2004, pp. 306–313 and NRC 1999 under "Success in Speech Recognition".
Lighthill, Professor Sir James (1973). "Artificial Intelligence: A General Survey". Artificial Intelligence: a paper symposium. Science Research Council.
NRC (1999). "Developments in Artificial Intelligence". Funding a Revolution: Government Support for Computing Research. National Academy Press. Archived from the original on 12 January 2008. Retrieved 30 August 2007.
Newquist, HP (1994). The Brain Makers: Genius, Ego, and Greed In The Search For Machines That Think. Macmillan/SAMS. ISBN 978-0-9885937-1-8.
Gleick, James, "The Fate of Free Will" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will, Princeton University Press, 2023, 333 pp.), The New York Review of Books, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. Multiple tests of artificial-intelligence efficacy are needed because, "just as there is no single test of athletic prowess, there cannot be one ultimate test of intelligence." One such test, a "Construction Challenge", would test perception and physical action—"two important elements of intelligent behavior that were entirely absent from the original Turing test." Another proposal has been to give machines the same standardized tests of science and other disciplines that schoolchildren take. A so far insuperable stumbling block to artificial intelligence is an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways." A prominent example is known as the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers.
Gursoy, F.; Kakadiaris, I. A. (2023). "Artificial intelligence research strategy of the United States: critical assessment and policy recommendations". Frontiers in Big Data 6:1206139. doi:10.3389/fdata.2023.1206139. "Global trends in AI research and development are being largely influenced by the US. Such trends are very important for the field's future, especially in terms of allocating funds to avoid a second AI Winter, advance the betterment of society, and guarantee society's safe transition to the new sociotechnical paradigm. This paper examines, through a critical lens, the official AI R&D strategies of the US government in light of this urgent issue. It makes six suggestions to enhance AI research strategies in the US as well as globally."
Roivainen, Eka, "AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."
Other Freddy II Robot Resources. Includes a link to the 90-minute 1973 "Controversy" debate from the Royal Institution, Lighthill versus Michie, McCarthy and Gregory, in response to Lighthill's report to the British government.