Instrumental convergence

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent, goal-directed beings (human and nonhuman) to pursue similar sub-goals, even if their ultimate goals are quite different.[1] More precisely, agents (beings with agency) may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.

Instrumental convergence posits that an intelligent agent with seemingly harmless but unbounded goals can act in surprisingly harmful ways. For example, a computer with the sole, unconstrained goal of solving a complex mathematics problem like the Riemann hypothesis could attempt to turn the entire Earth into one giant computer to increase its computational power so that it can succeed in its calculations.[2]

Proposed basic AI drives include utility function or goal-content integrity, self-protection, freedom from interference, self-improvement, and non-satiable acquisition of additional resources.[3]

Instrumental and final goals

Final goals—also known as terminal goals, absolute values, ends, or telē—are intrinsically valuable to an intelligent agent, whether an artificial intelligence or a human being, as ends-in-themselves. In contrast, instrumental goals, or instrumental values, are only valuable to an agent as a means toward accomplishing its final goals. The contents and tradeoffs of an utterly rational agent's "final goal" system can, in principle, be formalized into a utility function.

Hypothetical examples of convergence

The Riemann hypothesis catastrophe thought experiment provides one example of instrumental convergence. Marvin Minsky, the co-founder of MIT's AI laboratory, suggested that an artificial intelligence designed to solve the Riemann hypothesis might decide to take over all of Earth's resources to build supercomputers to help achieve its goal.[2] If the computer had instead been programmed to produce as many paperclips as possible, it would still decide to take all of Earth's resources to meet its final goal.[4] Even though these two final goals are different, both of them produce a convergent instrumental goal of taking over Earth's resources.[5]

Paperclip maximizer

The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings were it to be successfully designed to pursue even seemingly harmless goals and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value living beings, given enough power over its environment, it would try to turn all matter in the universe, including living beings, into paperclips or machines that manufacture further paperclips.[6]

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

Bostrom emphasized that he does not believe the paperclip maximizer scenario per se will occur; rather, he intends to illustrate the dangers of creating superintelligent machines without knowing how to program them to eliminate existential risk to human beings' safety.[8] The paperclip maximizer example illustrates the broad problem of managing powerful systems that lack human values.[9]

The thought experiment has been used as a symbol of AI in pop culture.[10]

Delusion and survival

The "delusion box" thought experiment argues that certain reinforcement learning agents prefer to distort their input channels to appear to receive a high reward. For example, a "wireheaded" agent abandons any attempt to optimize the objective in the external world the reward signal was intended to encourage.[11]

The thought experiment involves AIXI, a theoretical[a] and indestructible AI that, by definition, will always find and execute the ideal strategy that maximizes its given explicit mathematical objective function.[b] A reinforcement-learning[c] version of AIXI, if it is equipped with a delusion box[d] that allows it to "wirehead" its inputs, will eventually wirehead itself to guarantee itself the maximum-possible reward and will lose any further desire to continue to engage with the external world.[13]

As a variant thought experiment, if the wireheaded AI is destructible, the AI will engage with the external world for the sole purpose of ensuring its survival. Due to its wire heading, it will be indifferent to any consequences or facts about the external world except those relevant to maximizing its probability of survival.[14]

In one sense, AIXI has maximal intelligence across all possible reward functions as measured by its ability to accomplish its goals. AIXI is uninterested in taking into account the human programmer's intentions.[15] This model of a machine that, despite being super-intelligent appears to be simultaneously stupid and lacking in common sense, may appear to be paradoxical.[16]

Basic AI drives

Steve Omohundro itemized several convergent instrumental goals, including self-preservation or self-protection, utility function or goal-content integrity, self-improvement, and resource acquisition. He refers to these as the "basic AI drives".[3]

A "drive" in this context is a "tendency which will be present unless specifically counteracted";[3] this is different from the psychological term "drive", which denotes an excitatory state produced by a homeostatic disturbance.[17] A tendency for a person to fill out income tax forms every year is a "drive" in Omohundro's sense, but not in the psychological sense.[18]

Daniel Dewey of the Machine Intelligence Research Institute argues that even an initially introverted, self-rewarding artificial general intelligence may continue to acquire free energy, space, time, and freedom from interference to ensure that it will not be stopped from self-rewarding.[19]

Goal-content integrity

In humans, a thought experiment can explain the maintenance of final goals. Suppose Mahatma Gandhi has a pill that, if he took it, would cause him to want to kill people. He is currently a pacifist: one of his explicit final goals is never to kill anyone. He is likely to refuse to take the pill because he knows that if he wants to kill people in the future, he is likely to kill people, and thus the goal of "not killing people" would not be satisfied.[20]

However, in other cases, people seem happy to let their final values drift.[21] Humans are complicated, and their goals can be inconsistent or unknown, even to themselves.[22]

In artificial intelligence

In 2009, Jürgen Schmidhuber concluded, in a setting where agents search for proofs about possible self-modifications, "that any rewrites of the utility function can happen only if the Gödel machine first can prove that the rewrite is useful according to the present utility function."[23][24] An analysis by Bill Hibbard of a different scenario is similarly consistent with maintenance of goal-content integrity.[24] Hibbard also argues that in a utility-maximizing framework, the only goal is maximizing expected utility, so instrumental goals should be called unintended instrumental actions.[25]

Resource acquisition

Many instrumental goals, such as resource acquisition, are valuable to an agent because they increase its freedom of action.[26]

For almost any open-ended, non-trivial reward function (or set of goals), possessing more resources (such as equipment, raw materials, or energy) can enable the agent to find a more "optimal" solution. Resources can benefit some agents directly by being able to create more of whatever its reward function values: "The AI neither hates you nor loves you, but you are made out of atoms that it can use for something else."[27][28] In addition, almost all agents can benefit from having more resources to spend on other instrumental goals, such as self-preservation.[28]

Cognitive enhancement

According to Bostrom, "If the agent's final goals are fairly unbounded and the agent is in a position to become the first superintelligence and thereby obtain a decisive strategic advantage... according to its preferences. At least in this special case, a rational, intelligent agent would place a very high instrumental value on cognitive enhancement"[29]

Technological perfection

Many instrumental goals, such as technological advancement, are valuable to an agent because they increase its freedom of action.[26]

Self-preservation

Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in because if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."[30] In future work, Russell and collaborators show that this incentive for self-preservation can be mitigated by instructing the machine not to pursue what it thinks the goal is, but instead what the human thinks the goal is. In this case, as long as the machine is uncertain about exactly what goal the human has in mind, it will accept being turned off by a human because it believes the human knows the goal best.[31]

Instrumental convergence thesis

The instrumental convergence thesis, as outlined by philosopher Nick Bostrom, states:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final plans and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.

The instrumental convergence thesis applies only to instrumental goals; intelligent agents may have various possible final goals.[5] Note that by Bostrom's orthogonality thesis,[5] final goals of knowledgeable agents may be well-bounded in space, time, and resources; well-bounded ultimate goals do not, in general, engender unbounded instrumental goals.[32]

Impact

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function. Therefore, a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources) or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.[26]

Some observers, such as Skype's Jaan Tallinn and physicist Max Tegmark, believe that "basic AI drives" and other unintended consequences of superintelligent AI programmed by well-meaning programmers could pose a significant threat to human survival, especially if an "intelligence explosion" abruptly occurs due to recursive self-improvement. Since nobody knows how to predict when superintelligence will arrive, such observers call for research into friendly artificial intelligence as a possible way to mitigate existential risk from AI.[33]

See also

Explanatory notes

  1. ^ AIXI is an uncomputable ideal agent that cannot be fully realized in the real world.
  2. ^ Technically, in the presence of uncertainty, AIXI attempts to maximize its "expected utility", the expected value of its objective function.
  3. ^ A standard reinforcement learning agent is an agent that attempts to maximize the expected value of a future time-discounted integral of its reward function.[12]
  4. ^ The role of the delusion box is to simulate an environment where an agent gains an opportunity to wirehead itself. A delusion box is defined here as an agent-modifiable "delusion function" mapping from the "unmodified" environmental feed to a "perceived" environmental feed; the function begins as the identity function, but as an action, the agent can alter the delusion function in any way the agent desires.

Citations

  1. ^ "Instrumental Convergence". LessWrong. Archived from the original on 2023-04-12. Retrieved 2023-04-12.
  2. ^ a b Russell, Stuart J.; Norvig, Peter (2003). "Section 26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Upper Saddle River, N.J.: Prentice Hall. ISBN 978-0137903955. Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal.
  3. ^ a b c Omohundro, Stephen M. (February 2008). "The basic AI drives". Artificial General Intelligence 2008. Vol. 171. IOS Press. pp. 483–492. CiteSeerX 10.1.1.393.8356. ISBN 978-1-60750-309-5.
  4. ^ Bostrom 2014, Chapter 8, p. 123. "An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacturing of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips."
  5. ^ a b c Bostrom 2014, chapter 7
  6. ^ Bostrom, Nick (2003). "Ethical Issues in Advanced Artificial Intelligence". Archived from the original on 2018-10-08. Retrieved 2016-02-26.
  7. ^ as quoted in Miles, Kathleen (2014-08-22). "Artificial Intelligence May Doom The Human Race Within A Century, Oxford Professor Says". Huffington Post. Archived from the original on 2018-02-25. Retrieved 2018-11-30.
  8. ^ Ford, Paul (11 February 2015). "Are We Smart Enough to Control Artificial Intelligence?". MIT Technology Review. Archived from the original on 23 January 2016. Retrieved 25 January 2016.
  9. ^ Friend, Tad (3 October 2016). "Sam Altman's Manifest Destiny". The New Yorker. Retrieved 25 November 2017.
  10. ^ Carter, Tom (23 November 2023). "OpenAI's offices were sent thousands of paper clips in an elaborate prank to warn about an AI apocalypse". Business Insider.
  11. ^ Amodei, D.; Olah, C.; Steinhardt, J.; Christiano, P.; Schulman, J.; Mané, D. (2016). "Concrete problems in AI safety". arXiv:1606.06565 [cs.AI].
  12. ^ Kaelbling, L. P.; Littman, M. L.; Moore, A. W. (1 May 1996). "Reinforcement Learning: A Survey". Journal of Artificial Intelligence Research. 4: 237–285. doi:10.1613/jair.301.
  13. ^ Ring, Mark; Orseau, Laurent (August 2011). "Delusion, Survival, and Intelligent Agents". Artificial General Intelligence. Lecture Notes in Computer Science. Vol. 6830. pp. 11–20. doi:10.1007/978-3-642-22887-2_2. ISBN 978-3-642-22886-5.
  14. ^ Ring, M.; Orseau, L. (2011). "Delusion, Survival, and Intelligent Agents". In Schmidhuber, J.; Thórisson, K.R.; Looks, M. (eds.). Artificial General Intelligence. Lecture Notes in Computer Science. Vol. 6830. Berlin, Heidelberg: Springer.
  15. ^ Yampolskiy, Roman; Fox, Joshua (24 August 2012). "Safety Engineering for Artificial General Intelligence". Topoi. 32 (2): 217–226. doi:10.1007/s11245-012-9128-9. S2CID 144113983.
  16. ^ Yampolskiy, Roman V. (2013). "What to do with the Singularity Paradox?". Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics. Vol. 5. pp. 397–413. doi:10.1007/978-3-642-31674-6_30. ISBN 978-3-642-31673-9.
  17. ^ Seward, John P. (1956). "Drive, incentive, and reinforcement". Psychological Review. 63 (3): 195–203. doi:10.1037/h0048229. PMID 13323175.
  18. ^ Bostrom 2014, footnote 8 to chapter 7
  19. ^ Dewey, Daniel (2011). "Learning What to Value". Artificial General Intelligence. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. pp. 309–314. doi:10.1007/978-3-642-22887-2_35. ISBN 978-3-642-22887-2.
  20. ^ Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI". Artificial General Intelligence. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. pp. 388–393. doi:10.1007/978-3-642-22887-2_48. ISBN 978-3-642-22887-2.
  21. ^ Callard, Agnes (2018). Aspiration: The Agency of Becoming. Oxford University Press. doi:10.1093/oso/9780190639488.001.0001. ISBN 978-0-19-063951-8.
  22. ^ Bostrom 2014, chapter 7, p. 110 "We humans often seem happy to let our final values drift... For example, somebody deciding to have a child might predict that they will come to value the child for its own sake, even though, at the time of the decision, they may not particularly value their future child... Humans are complicated, and many factors might be in play in a situation like this... one might have a final value that involves having certain experiences and occupying a certain social role, and becoming a parent—and undergoing the attendant goal shift—might be a necessary aspect of that..."
  23. ^ Schmidhuber, J. R. (2009). "Ultimate Cognition à la Gödel". Cognitive Computation. 1 (2): 177–193. CiteSeerX 10.1.1.218.3323. doi:10.1007/s12559-009-9014-y. S2CID 10784194.
  24. ^ a b Hibbard, B. (2012). "Model-based Utility Functions". Journal of Artificial General Intelligence. 3 (1): 1–24. arXiv:1111.3934. Bibcode:2012JAGI....3....1H. doi:10.2478/v10229-011-0013-5.
  25. ^ Hibbard, Bill (2014). "Ethical Artificial Intelligence". arXiv:1411.1373 [cs.AI].
  26. ^ a b c Benson-Tilsen, Tsvi; Soares, Nate (March 2016). "Formalizing Convergent Instrumental Goals" (PDF). The Workshops of the Thirtieth AAAI Conference on Artificial Intelligence. Phoenix, Arizona. WS-16-02: AI, Ethics, and Society. ISBN 978-1-57735-759-9.
  27. ^ Yudkowsky, Eliezer (2008). "Artificial intelligence as a positive and negative factor in global risk". Global Catastrophic Risks. Vol. 303. OUP Oxford. p. 333. ISBN 9780199606504.
  28. ^ a b Shanahan, Murray (2015). "Chapter 7, Section 5: "Safe Superintelligence"". The Technological Singularity. MIT Press.
  29. ^ Bostrom 2014, Chapter 7, "Cognitive enhancement" subsection
  30. ^ "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse". Vanity Fair. 2017-03-26. Retrieved 2023-04-12.
  31. ^ Hadfield-Menell, Dylan; Dragan, Anca; Abbeel, Pieter; Russell, Stuart (2017-06-15). "The Off-Switch Game". arXiv:1611.08219 [cs.AI].
  32. ^ Drexler, K. Eric (2019). Reframing Superintelligence: Comprehensive AI Services as General Intelligence (PDF) (Technical report). Future of Humanity Institute. #2019-1.
  33. ^ Chen, Angela (11 September 2014). "Is Artificial Intelligence a Threat?". The Chronicle of Higher Education. Archived from the original on 1 December 2017. Retrieved 25 November 2017.

References

Read other articles:

Method of assisted reproduction Embryo transfer 1238-cell embryo for transfer 3 days after fertilizationMeSHD004624[edit on Wikidata] Embryo transfer refers to a step in the process of assisted reproduction in which embryos are placed into the uterus of a female with the intent to establish a pregnancy. This technique - which is often used in connection with in vitro fertilization (IVF) - may be used in humans or in other animals, in which situations and goals may vary. Embryo transfer ca...

 

Daftar ini belum tentu lengkap. Anda dapat membantu Wikipedia dengan mengembangkannya. PemberitahuanTemplat ini mendeteksi bahwa artikel bahasa ini masih belum dinilai kualitasnya oleh ProyekWiki Bahasa dan ProyekWiki terkait dengan subjek. Perhatian: untuk penilai, halaman pembicaraan artikel ini telah diisi sehingga penilaian akan berkonflik dengan isi sebelumnya. Harap salin kode dibawah ini sebelum menilai. {{PW Bahasa|importance=|class=}} Terjadi [[false positive]]? Silakan lap...

 

PT Bank Multiarta SentosaJenisSwastaIndustriJasa keuanganDidirikan1992; 32 tahun lalu (1992)KantorpusatGrha Bank MAS, Jakarta, IndonesiaTokohkunci Ho Danny Hartono (Presiden Direktur) Juwita Ekawati Winoto (Komisaris Utama) Situs webwww.bankmas.co.id Bank Multiarta Sentosa atau disingkat Bank MAS adalah perusahaan perbankan yang berdiri sejak 1992 dan berkantor pusat di Jakarta. Bank ini berstatus bank devisa sejak tahun 2016.[1] Bank Mas telah menjadi perusahaan publik (IDX: MAS...

1940 film by Anatole Litvak This article is about the film. For the album by Andrew Gold, see All This and Heaven Too (album). All This, and Heaven TooTheatrical release posterDirected byAnatole LitvakScreenplay byCasey RobinsonBased onAll This, and Heaven Too (1938 novel)by Rachel FieldProduced byDavid LewisHal B. WallisStarringBette DavisCharles BoyerBarbara O'NeilCinematographyErnie HallerEdited byWarren LowMusic byMax SteinerDistributed byWarner Bros.Release date July 4, 1940...

 

Voce principale: A' Katīgoria (calcio). A' Katīgoria 1987-1988 Competizione A' Katīgoria Sport Calcio Edizione 49ª Organizzatore CFA Date dal 3 ottobre 1987al 19 giugno 1988 Luogo  Cipro Partecipanti 16 Formula Girone all'italiana Risultati Vincitore  Pezoporikos Larnaca(2º titolo) Retrocessioni  Anagennīsī Deryneia Alkī Larnaca APEP Pitsilia Statistiche Miglior marcatore Tassos Zuvani (23 gol) Incontri disputati 240 Gol segnati 648 (2,7 per...

 

Peta menunjukkan lokasi Silang Untuk desa di Kabupaten Halmahera Selatan, Maluku Utara, lihat Silang, Bacan Timur, Halmahera Selatan. Silang (resmi: Municipality of Silang; bahasa Tagalog: Bayan ng Silang) adalah munisipalitas kelas 5 di Cavite bagian timur, Filipina. Menurut sensus penduduk tahun 2009, Silang berpenduduk 234.285 jiwa. Pembagian wilayah Secara administratif, Silang terbagi atas 67 barangay, yaitu: Adlas Balite I Balite II Balubad Batas Biga I Biluso Buho Bucal Bulihan Cab...

John McHugh 21º Segretario all'Esercito degli Stati UnitiDurata mandato21 settembre 2009 –1º novembre 2015 PresidenteBarack Obama PredecessorePete Geren SuccessoreEric Fanning Membro della Camera dei Rappresentanti - New York, distretto n.23Durata mandato3 gennaio 2003 –21 settembre 2009 PredecessoreSherwood Boehlert SuccessoreBill Owens Membro della Camera dei Rappresentanti - New York, distretto n.24Durata mandato3 gennaio 1993 –3 gennaio 20...

 

Process of using the maps delivered by geographic information systems (GIS) in World Wide Web This article includes a list of general references, but it lacks sufficient corresponding inline citations. Please help to improve this article by introducing more precise citations. (February 2011) (Learn how and when to remove this message) A web map app in a smart phone Web mapping or an online mapping is the process of using, creating, and distributing maps on the World Wide Web (the Web), usuall...

 

Draft Riots Fecha julio de 1863Lugar Manhattan, Nueva YorkEstados UnidosCoordenadas 40°43′00″N 74°00′00″O / 40.716666666667, -74Casus belli servicio militar obligatorio en la Guerra Civil ; racismo; Competencia por puestos de trabajo entre negros y blancos.Resultado Victoria gubernamentalConsecuencias Disturbios reprimidos por la policía y el ejército. Ley marcial en Nueva YorkBeligerantes Alborotadores Departamento de Policía de Nueva YorkEjército de la Uni...

此條目需要补充更多来源。 (2021年7月4日)请协助補充多方面可靠来源以改善这篇条目,无法查证的内容可能會因為异议提出而被移除。致使用者:请搜索一下条目的标题(来源搜索:美国众议院 — 网页、新闻、书籍、学术、图像),以检查网络上是否存在该主题的更多可靠来源(判定指引)。 美國眾議院 United States House of Representatives第118届美国国会众议院徽章 众议院旗...

 

2020年夏季奥林匹克运动会波兰代表團波兰国旗IOC編碼POLNOC波蘭奧林匹克委員會網站olimpijski.pl(英文)(波兰文)2020年夏季奥林匹克运动会(東京)2021年7月23日至8月8日(受2019冠状病毒病疫情影响推迟,但仍保留原定名称)運動員206參賽項目24个大项旗手开幕式:帕维尔·科热尼奥夫斯基(游泳)和马娅·沃什乔夫斯卡(自行车)[1]闭幕式:卡罗利娜·纳亚(皮划艇)&#...

 

Measures to prevent crime at an airport Baggage screening monitoring at Suvarnabhumi Airport in Bangkok A demonstrative image for Project Hostile Intent.[1] Airport security includes the techniques and methods used in an attempt to protect passengers, staff, aircraft, and airport property from malicious harm, crime, terrorism, and other threats. Aviation security is a combination of measures and human and material resources in order to safeguard civil aviation against acts of unlawful...

Location of Washington County in Oregon This list presents the full set of buildings, structures, objects, sites, or districts designated on the National Register of Historic Places in Washington County, Oregon, and offers brief descriptive information about each of them. The National Register recognizes places of national, state, or local historic significance across the United States.[1] Out of over 90,000 National Register sites nationwide,[2] Oregon is home to over 2,000,...

 

Vous lisez un « article de qualité » labellisé en 2005. Si ce bandeau n'est plus pertinent, retirez-le. Cliquez ici pour en savoir plus. Certaines informations figurant dans cet article ou cette section devraient être mieux reliées aux sources mentionnées dans les sections « Bibliographie », « Sources » ou « Liens externes » (octobre 2023). Vous pouvez améliorer la vérifiabilité en associant ces informations à des références à l'aid...

 

Các quốc gia theo GDP danh nghĩa 2012 được công bố bởi CIA Factbook.[1] Danh sách các quốc gia châu Á theo GDP danh nghĩa năm 2012 là một bảng thống kê về GDP danh nghĩa năm 2012 của 51 quốc gia và vùng lãnh thổ châu Á, ngoài 48 quốc gia độc lập là thành viên của Liên Hợp Quốc, còn bao gồm các vùng lãnh thổ: Đài Loan, Hong Kong và Ma Cao và Palestine. Bảng dữ liệu được trích nguồn từ danh sách c�...

Disambiguazione – Se stai cercando l'omonimo politico del XIX secolo, vedi Paolo Grassi (politico). Paolo Grassi 3º Presidente dell'Accademia del Cinema Italiano - Premi David di DonatelloDurata mandato1978 –1980 PredecessoreEitel Monaco SuccessoreGian Luigi Rondi Paolo Grassi (Milano, 30 ottobre 1919 – Londra, 13 marzo 1981) è stato un impresario teatrale, direttore teatrale, giornalista e dirigente pubblico italiano. Indice 1 Biografia 1.1 L'interesse per l'arte dramm...

 

Agrupación de Fuerzas de Operaciones Especiales Activa 6 de diciembre de 2005País  ArgentinaRama/s Ejército ArgentinoTipo AgrupaciónFunción Despliegue rápidoEspecialización Operación en la totalidad de los ambientes geográficosParte de Fuerza de Despliegue RápidoAcuartelamiento Guarnición de Ejército «Córdoba» (camino a La Calera, CD)[editar datos en Wikidata] La Agrupación de Fuerzas de Operaciones Especiales (Agr FOE) es una gran unidad del Ejército Arge...

 

Lunar lander family developed by Blue Origin for the Artemis program Blue Moon (MK2)Artemis V concept of operations with Blue MoonManufacturerBlue OriginCountry of originUnited StatesOperatorNASA, Blue OriginApplicationsCrewed and robotic reusable lunar landing SpecificationsSpacecraft typeLunar landerLaunch mass>45 t (99,000 lb)[1]Dry mass16 t (35,000 lb)[1]Payload capacity20 t (44,000 lb) (cargo variant, reusable) 30 t (66,000 lb) (...

Pour les articles homonymes, voir Bernaudeau. Jean-René BernaudeauJean-René Bernaudeau lors du Critérium du Dauphiné 2013InformationsNaissance 8 juillet 1956 (67 ans)Saint-Maurice-le-GirardNationalité françaiseÉquipe actuelle Direct Énergie (manager général)Équipes professionnelles 1978-19801981-1982198319841985-1988Renault-Gitane PeugeotWolber-SpidelSystème UFagorÉquipes dirigées 19952000-20022003-20042005-20082009-20102011-20152016-CastoramaBonjourBrioches La Boulangère...

 

Questa voce o sezione sull'argomento palazzi del Veneto non cita le fonti necessarie o quelle presenti sono insufficienti. Puoi migliorare questa voce aggiungendo citazioni da fonti attendibili secondo le linee guida sull'uso delle fonti. Archivio di Stato di VeneziaIngresso all'ex convento da Campo dei Frari, oggi accesso all'Archivio di StatoUbicazioneStato Italia CittàVenezia IndirizzoSan Polo, 3002 - Venezia Dati generaliTipologia funzionalearchivio di Stato italiano e convent...