Pooling layer

In neural networks, a pooling layer is a kind of network layer that downsamples and aggregates information dispersed among many vectors into fewer vectors.[1] It has several uses: it removes redundant information, reducing the amount of computation and memory required; it makes the model more robust to small variations in the input; and it increases the receptive field of neurons in later layers of the network.

Convolutional neural network pooling

Pooling is most commonly used in convolutional neural networks (CNN). Below is a description of pooling in 2-dimensional CNNs. The generalization to n-dimensions is immediate.

As notation, we consider a tensor $x \in \mathbb{R}^{H \times W \times C}$, where $H$ is height, $W$ is width, and $C$ is the number of channels. A pooling layer outputs a tensor $y \in \mathbb{R}^{H' \times W' \times C}$.

We define two variables $f$ and $s$, called the "filter size" (aka "kernel size") and the "stride". Sometimes it is necessary to use a different filter size and stride for the horizontal and vertical directions. In such cases, we define four variables $(f_H, f_W, s_H, s_W)$.

The receptive field of an entry in the output tensor $y$ is the set of all the entries in $x$ that can affect that entry.

Max pooling

Worked example of max pooling, with given filter size $f$ and stride $s$.

Max Pooling (MaxPool) is commonly used in CNNs to reduce the spatial dimensions of feature maps.

Define
$$y_{0,0,c} = \max(x_{0:f,\, 0:f,\, c})$$
where $0{:}f$ means the range $0, 1, \dots, f-1$. Note that we need to avoid the off-by-one error: the index $f$ itself is excluded. The next input is
$$y_{0,1,c} = \max(x_{0:f,\, s:s+f,\, c})$$
and so on. The receptive field of $y_{0,1,c}$ is $x_{0:f,\, s:s+f,\, c}$, so in general,
$$y_{i,j,c} = \max(x_{si:si+f,\, sj:sj+f,\, c})$$
If the horizontal and vertical filter sizes and strides differ, then in general,
$$y_{i,j,c} = \max(x_{s_H i : s_H i + f_H,\; s_W j : s_W j + f_W,\; c})$$
More succinctly, we can write $y_{i,j,c} = \max(\text{receptive field of } y_{i,j,c})$.
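This definition translates directly into code. Below is a minimal sketch of 2-D max pooling in Python with NumPy; the function name max_pool2d and the explicit loop are illustrative choices, not any particular library's API.

```python
import numpy as np

def max_pool2d(x, f, s):
    """Max-pool an (H, W, C) tensor with square filter size f and stride s.

    Assumes H and W are of the form f + s*(n - 1), so no padding is needed.
    """
    H, W, C = x.shape
    H_out = (H - f) // s + 1
    W_out = (W - f) // s + 1
    y = np.empty((H_out, W_out, C), dtype=x.dtype)
    for i in range(H_out):
        for j in range(W_out):
            # The receptive field of y[i, j, c] is x[s*i : s*i+f, s*j : s*j+f, c].
            y[i, j] = x[s*i : s*i+f, s*j : s*j+f].max(axis=(0, 1))
    return y

x = np.arange(16, dtype=float).reshape(4, 4, 1)
print(max_pool2d(x, f=2, s=2)[..., 0])  # [[ 5.  7.] [13. 15.]]
```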

Three example padding conditions. Replication padding means that a pixel outside the image is padded with the closest pixel inside. Reflection padding pads with the pixel inside, reflected across the boundary of the image. Circular padding wraps around to the other side of the image.

If $H$ is not expressible as $f + s(n-1)$ where $n$ is an integer, then computing the entries of the output tensor on the boundaries would require max pooling to take as inputs entries that lie off the tensor (and similarly for $W$). In this case, how those non-existent entries are handled depends on the padding condition, illustrated on the right.
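As a hedged illustration of these padding conditions, NumPy's np.pad exposes them as the modes "edge" (replication), "reflect" (reflection), and "wrap" (circular):

```python
import numpy as np

row = np.array([1, 2, 3, 4])
print(np.pad(row, 2, mode="edge"))     # [1 1 1 2 3 4 4 4]  (replication)
print(np.pad(row, 2, mode="reflect"))  # [3 2 1 2 3 4 3 2]  (reflection)
print(np.pad(row, 2, mode="wrap"))     # [3 4 1 2 3 4 1 2]  (circular)
```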

Global Max Pooling (GMP) is a specific kind of max pooling where the output tensor has shape $1 \times 1 \times C$, and the receptive field of $y_{0,0,c}$ is all of $x_{:,:,c}$. That is, it takes the maximum over each entire channel. It is often used just before the final fully connected layers in a CNN classification head.

Average pooling

Average pooling (AvgPool) is similarly defined, replacing the maximum with the average:
$$y_{i,j,c} = \frac{1}{f^2} \sum_{0 \le \Delta i < f,\ 0 \le \Delta j < f} x_{si+\Delta i,\, sj+\Delta j,\, c}$$
Global Average Pooling (GAP) is defined similarly to GMP. It was first proposed in Network-in-Network.[2] Similarly to GMP, it is often used just before the final fully connected layers in a CNN classification head.
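In practice these layers come ready-made in deep-learning frameworks. A brief sketch in PyTorch, which uses an NCHW (batch, channels, height, width) layout:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)  # (batch, channels, height, width)

max_pool = nn.MaxPool2d(kernel_size=2, stride=2)
avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)
gmp = nn.AdaptiveMaxPool2d(1)  # global max pooling: one value per channel
gap = nn.AdaptiveAvgPool2d(1)  # global average pooling: one value per channel

print(max_pool(x).shape)  # torch.Size([1, 3, 4, 4])
print(gap(x).shape)       # torch.Size([1, 3, 1, 1])
```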

Interpolations

There are several poolings that interpolate between max pooling and average pooling.

Mixed Pooling is a linear combination of max pooling and average pooling.[3] That is,
$$y = \lambda \cdot \mathrm{MaxPool}(x) + (1 - \lambda) \cdot \mathrm{AvgPool}(x)$$
where $\lambda \in [0, 1]$ is either a hyperparameter, a learnable parameter, or randomly sampled anew every time.

Lp Pooling is like average pooling, but uses the Lp norm average instead of the plain average:
$$y = \left( \frac{1}{n} \sum_{x' \in R} |x'|^p \right)^{1/p}$$
where $n$ is the size of the receptive field $R$, and $p \ge 1$ is a hyperparameter. If all activations are non-negative, then average pooling is the case of $p = 1$, and max pooling is the limit $p \to \infty$. Square-root pooling is the case of $p = 2$.[4]

Stochastic pooling samples a random activation $x'$ from the receptive field $R$ with probability $\frac{x'}{\sum_{x'' \in R} x''}$. It is the same as average pooling in expectation.[5]

Softmax pooling is like max pooling, but uses softmax weighting, i.e.
$$y = \frac{\sum_{x' \in R} e^{\beta x'} x'}{\sum_{x' \in R} e^{\beta x'}}$$
where $\beta > 0$. Average pooling is the case of $\beta \to 0$, and max pooling is the case of $\beta \to \infty$.[4]

Local Importance-based Pooling generalizes softmax pooling by
$$y = \frac{\sum_{x' \in R} e^{g(x')} x'}{\sum_{x' \in R} e^{g(x')}}$$
where $g$ is a learnable function.[6]
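As a minimal sketch, the interpolating variants above can all be written as reductions over a flattened receptive field; here lam, p, and beta are the respective parameters, and non-negative activations are assumed for the Lp case:

```python
import numpy as np

r = np.array([0.5, 1.0, 2.0, 4.0])  # activations in one receptive field

def mixed_pool(r, lam):
    return lam * r.max() + (1 - lam) * r.mean()

def lp_pool(r, p):
    return np.mean(r ** p) ** (1 / p)   # p=1: average; p -> inf: max

def softmax_pool(r, beta):
    w = np.exp(beta * r)
    return np.sum(w * r) / np.sum(w)    # beta=0: average; beta -> inf: max

print(mixed_pool(r, 0.5), lp_pool(r, 4.0), softmax_pool(r, 3.0))
```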

RoI pooling to size 2x2. In this example, the RoI proposal has size 7x5 and is divided into 4 sub-rectangles. Because 7 is not divisible by 2, it is split into the nearest integers, as 7 = 3 + 4; similarly, 5 is split into 2 + 3. The maximum of each sub-rectangle is taken; this is the output of the RoI pooling.

Other poolings

Spatial pyramid pooling applies max pooling (or any other form of pooling) in a pyramid structure. That is, it applies global max pooling, then applies max pooling to the image divided into 4 equal parts, then 16, etc. The results are then concatenated. It is a hierarchical form of global pooling, and, like global pooling, it is often used just before a classification head.[7]
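A minimal sketch of the idea, pooling a single-channel map over 1x1, 2x2, and 4x4 grids and concatenating; the function name and the floor-based bin edges are illustrative assumptions:

```python
import numpy as np

def spatial_pyramid_pool(x, levels=(1, 2, 4)):
    """Concatenate max pooling over 1x1, 2x2, 4x4, ... grids of a 2-D map."""
    H, W = x.shape
    feats = []
    for n in levels:
        hs = [int(np.floor(i * H / n)) for i in range(n + 1)]  # bin edges
        ws = [int(np.floor(j * W / n)) for j in range(n + 1)]
        for i in range(n):
            for j in range(n):
                feats.append(x[hs[i]:hs[i+1], ws[j]:ws[j+1]].max())
    return np.array(feats)  # length 1 + 4 + 16 = 21, regardless of H and W

print(spatial_pyramid_pool(np.random.rand(13, 9)).shape)  # (21,)
```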

Region of Interest Pooling (also known as RoI pooling) is a variant of max pooling used in R-CNNs for object detection.[8] It is designed to take an arbitrarily-sized input matrix, and output a fixed-sized output matrix.
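A hedged sketch of RoI max pooling matching the 7x5 -> 2x2 worked example in the figure; the floor-based bin edges reproduce the 3 + 4 and 2 + 3 splits described there, though real implementations differ in their rounding conventions:

```python
import numpy as np

def roi_max_pool(roi, out_h, out_w):
    """Max-pool an arbitrarily-sized 2-D region into a fixed (out_h, out_w) grid."""
    H, W = roi.shape
    hs = [int(np.floor(i * H / out_h)) for i in range(out_h + 1)]  # 7 -> [0, 3, 7]
    ws = [int(np.floor(j * W / out_w)) for j in range(out_w + 1)]  # 5 -> [0, 2, 5]
    return np.array([[roi[hs[i]:hs[i+1], ws[j]:ws[j+1]].max()
                      for j in range(out_w)] for i in range(out_h)])

roi = np.arange(35).reshape(7, 5)    # a 7x5 RoI proposal
print(roi_max_pool(roi, 2, 2))       # [[11 14] [31 34]]
```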

Covariance pooling computes the covariance matrix of the vectors $x_{i,j,:} \in \mathbb{R}^C$, which is then flattened to a $C^2$-dimensional vector $y$. Global covariance pooling is used similarly to global max pooling. As average pooling computes the average, which is a first-degree statistic, and covariance is a second-degree statistic, covariance pooling is also called "second-order pooling". It can be generalized to higher-order poolings.[9][10]
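A minimal sketch of global covariance pooling over an (H, W, C) feature map, treating each spatial location as one C-dimensional vector:

```python
import numpy as np

def covariance_pool(x):
    """Flattened covariance matrix of the C-dimensional vectors x[i, j, :]."""
    H, W, C = x.shape
    v = x.reshape(H * W, C)        # one C-dimensional vector per spatial location
    cov = np.cov(v, rowvar=False)  # (C, C) second-order statistic
    return cov.reshape(-1)         # flattened to a C^2-dimensional vector

print(covariance_pool(np.random.rand(8, 8, 3)).shape)  # (9,)
```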

Blur Pooling means applying a blurring method before downsampling. For example, Rect-2 blur pooling means taking an average pooling with filter size $f = 2$ and stride $s = 1$, then taking every second pixel (identity with stride $s = 2$).[11]
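A minimal 1-D sketch of Rect-2 blur pooling; boundary handling is simplified here to the valid region only, which is an assumption rather than the paper's exact convention:

```python
import numpy as np

def rect2_blur_pool(x):
    """Average pooling with f=2, s=1 (the Rect-2 blur), then stride-2 subsampling."""
    blurred = (x[:-1] + x[1:]) / 2  # f=2, s=1 average pooling (valid region only)
    return blurred[::2]             # identity with stride s=2

x = np.array([1.0, 3.0, 2.0, 8.0, 4.0])
print(rect2_blur_pool(x))  # [2. 5.]
```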

Vision Transformer pooling

In Vision Transformers (ViT), the following kinds of pooling are commonly used.

BERT-like pooling uses a dummy [CLS] token ("classification"). For classification, the output at [CLS] is the classification token, which is then processed by a LayerNorm-feedforward-softmax module into a probability distribution, which is the network's predicted class distribution. This is the one used by the original ViT[12] and Masked Autoencoder.[13]

Global average pooling (GAP) does not use the dummy token, but simply takes the average of all output tokens as the classification token. It was mentioned in the original ViT as being equally good.[12]

Multihead attention pooling (MAP) applies a multiheaded attention block to pooling. Specifically, it takes as input a list of vectors $x_1, \dots, x_n$, which might be thought of as the output vectors of a layer of a ViT. It then applies a feedforward layer on each vector, resulting in a matrix $V = [\mathrm{FF}(x_1), \dots, \mathrm{FF}(x_n)]$. This is then sent to a multiheaded attention, resulting in $\mathrm{MultiheadedAttention}(Q, V, V)$, where $Q$ is a matrix of trainable parameters.[14] This was first proposed in the Set Transformer architecture.[15]
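A hedged PyTorch sketch of multihead attention pooling: a trainable probe $Q$ attends over the token matrix. The class name MAPHead and the single-probe design are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MAPHead(nn.Module):
    """Multihead attention pooling: a learnable probe attends over n tokens."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.probe = nn.Parameter(torch.randn(1, 1, dim))  # trainable query Q
        self.ff = nn.Linear(dim, dim)                      # per-token feedforward
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                # x: (batch, n, dim) output tokens
        v = self.ff(x)                   # feedforward applied to each vector
        q = self.probe.expand(x.shape[0], -1, -1)
        pooled, _ = self.attn(q, v, v)   # MultiheadedAttention(Q, V, V)
        return pooled.squeeze(1)         # (batch, dim) pooled representation

x = torch.randn(2, 16, 64)   # batch of 2 sequences of 16 tokens
print(MAPHead(64)(x).shape)  # torch.Size([2, 64])
```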

Later papers demonstrated that GAP and MAP both perform better than BERT-like pooling.[14][16]

Graph neural network pooling

In graph neural networks (GNN), there are also two forms of pooling: global and local. Global pooling can be reduced to a local pooling whose receptive field is the entire graph.

  1. Local pooling: a local pooling layer coarsens the graph via downsampling. Local pooling is used to increase the receptive field of a GNN, in a similar fashion to pooling layers in convolutional neural networks. Examples include k-nearest neighbours pooling, top-k pooling,[17] and self-attention pooling.[18]
  2. Global pooling: a global pooling layer, also known as readout layer, provides a fixed-size representation of the whole graph. The global pooling layer must be permutation invariant, such that permutations in the ordering of graph nodes and edges do not alter the final output.[19] Examples include element-wise sum, mean, or maximum.

Local pooling layers coarsen the graph via downsampling. We present here several learnable local pooling strategies that have been proposed.[19] In each case, the input graph is represented by a matrix $\mathbf{X}$ of node features and the graph adjacency matrix $\mathbf{A}$. The output is the new matrix $\mathbf{X}'$ of node features and the new graph adjacency matrix $\mathbf{A}'$.

Top-k pooling

We first set
$$\mathbf{y} = \frac{\mathbf{X}\mathbf{p}}{\Vert\mathbf{p}\Vert}$$
where $\mathbf{p}$ is a learnable projection vector. The projection vector computes a scalar projection value for each graph node.

The top-k pooling layer[17] can then be formalised as follows:
$$\mathbf{X}' = (\mathbf{X} \odot \sigma(\mathbf{y}))_{\mathbf{i}}, \qquad \mathbf{A}' = \mathbf{A}_{\mathbf{i},\mathbf{i}}$$
where $\mathbf{i}$ is the subset of nodes with the top-k highest projection scores, $\odot$ denotes element-wise matrix multiplication, and $\sigma(\cdot)$ is the sigmoid function. In other words, the nodes with the top-k highest projection scores are retained in the new adjacency matrix $\mathbf{A}'$. The $\sigma(\cdot)$ operation makes the projection vector $\mathbf{p}$ trainable by backpropagation; without it, the top-k selection would produce discrete, non-differentiable outputs.[17]
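A minimal PyTorch sketch of these equations; k is given directly as a node count, whereas implementations usually derive it from a pooling ratio:

```python
import torch

def top_k_pool(X, A, p, k):
    """Keep the k nodes with the highest projection scores.

    X: (n, d) node features; A: (n, n) adjacency; p: (d,) learnable projection.
    """
    y = (X @ p) / p.norm()             # scalar projection score per node
    idx = torch.topk(y, k).indices     # the subset i of top-k nodes
    X_new = X[idx] * torch.sigmoid(y[idx]).unsqueeze(-1)  # gate keeps p trainable
    A_new = A[idx][:, idx]             # adjacency of the induced subgraph
    return X_new, A_new

X, A = torch.randn(6, 4), (torch.rand(6, 6) > 0.5).float()
p = torch.randn(4, requires_grad=True)
X2, A2 = top_k_pool(X, A, p, k=3)
print(X2.shape, A2.shape)  # torch.Size([3, 4]) torch.Size([3, 3])
```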

Self-attention pooling

We first set
$$\mathbf{y} = \mathrm{GNN}(\mathbf{X}, \mathbf{A})$$
where $\mathrm{GNN}$ is a generic permutation equivariant GNN layer (e.g., GCN, GAT, MPNN).

The self-attention pooling layer[18] can then be formalised as follows:
$$\mathbf{X}' = (\mathbf{X} \odot \mathbf{y})_{\mathbf{i}}, \qquad \mathbf{A}' = \mathbf{A}_{\mathbf{i},\mathbf{i}}$$
where $\mathbf{i}$ is the subset of nodes with the top-k highest projection scores, and $\odot$ denotes element-wise matrix multiplication.

The self-attention pooling layer can be seen as an extension of the top-k pooling layer. Unlike top-k pooling, the self-attention scores computed in self-attention pooling account for both the graph features and the graph topology.
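The change relative to the top-k sketch above is only in how the scores are computed. A hedged sketch with a one-layer GCN-style scorer (self-loops added, degree normalization omitted for brevity):

```python
import torch

def self_attention_pool(X, A, W, k):
    """Like top_k_pool, but scores come from a permutation-equivariant GNN layer."""
    A_hat = A + torch.eye(A.shape[0])           # adjacency with self-loops
    y = torch.tanh(A_hat @ X @ W).squeeze(-1)   # (n,) attention score per node
    idx = torch.topk(y, k).indices
    return X[idx] * y[idx].unsqueeze(-1), A[idx][:, idx]

X, A = torch.randn(6, 4), (torch.rand(6, 6) > 0.5).float()
W = torch.randn(4, 1, requires_grad=True)       # learnable GNN-layer weights
X2, A2 = self_attention_pool(X, A, W, k=3)
print(X2.shape, A2.shape)  # torch.Size([3, 4]) torch.Size([3, 3])
```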

History

In the early 20th century, neuroanatomists noticed a certain motif where multiple neurons synapse onto the same neuron. This was given a functional explanation as "local pooling", which makes vision translation-invariant. (Hartline, 1940)[20] gave supporting evidence for the theory by electrophysiological experiments on the receptive fields of retinal ganglion cells. The Hubel and Wiesel experiments showed that the vision system in cats is similar to a convolutional neural network, with some cells summing over inputs from the lower layer.[21]: Fig. 19, 20  See (Westheimer, 1965)[22] for citations to this early literature.

During the 1970s, to explain the effects of depth perception, some such as (Julesz and Chang, 1976)[23] proposed that the vision system implements a disparity-selective mechanism by global pooling, where the outputs from matching pairs of retinal regions in the two eyes are pooled in higher order cells. See [24] for citations to this early literature.

In artificial neural networks, max pooling was used in 1990 for speech processing (1-dimensional convolution).[25]

See also

References

  1. ^ Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "7.5. Pooling". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3.
  2. ^ Lin, Min; Chen, Qiang; Yan, Shuicheng (2013). "Network In Network". arXiv:1312.4400 [cs.NE].
  3. ^ Yu, Dingjun; Wang, Hanli; Chen, Peiqiu; Wei, Zhihua (2014). "Mixed Pooling for Convolutional Neural Networks". In Miao, Duoqian; Pedrycz, Witold; Ślȩzak, Dominik; Peters, Georg; Hu, Qinghua; Wang, Ruizhi (eds.). Rough Sets and Knowledge Technology. Lecture Notes in Computer Science. Vol. 8818. Cham: Springer International Publishing. pp. 364–375. doi:10.1007/978-3-319-11740-9_34. ISBN 978-3-319-11740-9.
  4. ^ a b Boureau, Y-Lan; Ponce, Jean; LeCun, Yann (2010-06-21). "A theoretical analysis of feature pooling in visual recognition". Proceedings of the 27th International Conference on International Conference on Machine Learning. ICML'10. Madison, WI, USA: Omnipress: 111–118. ISBN 978-1-60558-907-7.
  5. ^ Zeiler, Matthew D.; Fergus, Rob (2013-01-15). "Stochastic Pooling for Regularization of Deep Convolutional Neural Networks". arXiv:1301.3557 [cs.LG].
  6. ^ Gao, Ziteng; Wang, Limin; Wu, Gangshan (2019). "LIP: Local Importance-Based Pooling". pp. 3355–3364. arXiv:1908.04156.
  7. ^ He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2015-09-01). "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence. 37 (9): 1904–1916. arXiv:1406.4729. doi:10.1109/TPAMI.2015.2389824. ISSN 0162-8828. PMID 26353135.
  8. ^ Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "14.8. Region-based CNNs (R-CNNs)". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3.
  9. ^ Tuzel, Oncel; Porikli, Fatih; Meer, Peter (2006). "Region Covariance: A Fast Descriptor for Detection and Classification". In Leonardis, Aleš; Bischof, Horst; Pinz, Axel (eds.). Computer Vision – ECCV 2006. Vol. 3952. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 589–600. doi:10.1007/11744047_45. ISBN 978-3-540-33834-5. Retrieved 2024-09-09.
  10. ^ Wang, Qilong; Xie, Jiangtao; Zuo, Wangmeng; Zhang, Lei; Li, Peihua (2020). "Deep CNNs Meet Global Covariance Pooling: Better Representation and Generalization". IEEE Transactions on Pattern Analysis and Machine Intelligence. 43 (8): 2582–2597. arXiv:1904.06836. doi:10.1109/TPAMI.2020.2974833. ISSN 0162-8828. PMID 32086198.
  11. ^ Zhang, Richard (2018-09-27). "Making Convolutional Networks Shift-Invariant Again". arXiv:1904.11486.
  12. ^ a b Dosovitskiy, Alexey; Beyer, Lucas; Kolesnikov, Alexander; Weissenborn, Dirk; Zhai, Xiaohua; Unterthiner, Thomas; Dehghani, Mostafa; Minderer, Matthias; Heigold, Georg; Gelly, Sylvain; Uszkoreit, Jakob (2021-06-03). "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale". arXiv:2010.11929 [cs.CV].
  13. ^ He, Kaiming; Chen, Xinlei; Xie, Saining; Li, Yanghao; Dollar, Piotr; Girshick, Ross (June 2022). "Masked Autoencoders Are Scalable Vision Learners". 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 15979–15988. arXiv:2111.06377. doi:10.1109/cvpr52688.2022.01553. ISBN 978-1-6654-6946-3.
  14. ^ a b Zhai, Xiaohua; Kolesnikov, Alexander; Houlsby, Neil; Beyer, Lucas (June 2022). "Scaling Vision Transformers". 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 1204–1213. arXiv:2106.04560. doi:10.1109/cvpr52688.2022.01179. ISBN 978-1-6654-6946-3.
  15. ^ Lee, Juho; Lee, Yoonho; Kim, Jungtaek; Kosiorek, Adam; Choi, Seungjin; Teh, Yee Whye (2019-05-24). "Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks". Proceedings of the 36th International Conference on Machine Learning. PMLR: 3744–3753. arXiv:1810.00825.
  16. ^ Karamcheti, Siddharth; Nair, Suraj; Chen, Annie S.; Kollar, Thomas; Finn, Chelsea; Sadigh, Dorsa; Liang, Percy (2023-02-24). "Language-Driven Representation Learning for Robotics". arXiv:2302.12766 [cs.RO].
  17. ^ a b c Gao, Hongyang; Ji, Shuiwang (2019). "Graph U-Nets". arXiv:1905.05178 [cs.LG].
  18. ^ a b Lee, Junhyun; Lee, Inyeop; Kang, Jaewoo (2019). "Self-Attention Graph Pooling". arXiv:1904.08082 [cs.LG].
  19. ^ a b Liu, Chuang; Zhan, Yibing; Li, Chang; Du, Bo; Wu, Jia; Hu, Wenbin; Liu, Tongliang; Tao, Dacheng (2022). "Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities". arXiv:2204.07321 [cs.LG].
  20. ^ Hartline, H. K. (1940-09-30). "The Receptive Fields of Optic Nerve Fibers". American Journal of Physiology. Legacy Content. 130 (4): 690–699. doi:10.1152/ajplegacy.1940.130.4.690. ISSN 0002-9513.
  21. ^ Hubel, D. H.; Wiesel, T. N. (January 1962). "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex". The Journal of Physiology. 160 (1): 106–154.2. doi:10.1113/jphysiol.1962.sp006837. ISSN 0022-3751. PMC 1359523. PMID 14449617.
  22. ^ Westheimer, G (December 1965). "Spatial interaction in the human retina during scotopic vision". The Journal of Physiology. 181 (4): 881–894. doi:10.1113/jphysiol.1965.sp007803. ISSN 0022-3751. PMC 1357689. PMID 5881260.
  23. ^ Julesz, Bela; Chang, Jih Jie (March 1976). "Interaction between pools of binocular disparity detectors tuned to different disparities". Biological Cybernetics. 22 (2): 107–119. doi:10.1007/BF00320135. ISSN 0340-1200. PMID 1276243.
  24. ^ Schumer, Robert; Ganz, Leo (1979-01-01). "Independent stereoscopic channels for different extents of spatial pooling". Vision Research. 19 (12): 1303–1314. doi:10.1016/0042-6989(79)90202-5. ISSN 0042-6989. PMID 532098.
  25. ^ Yamaguchi, Kouichi; Sakamoto, Kenji; Akabane, Toshio; Fujimoto, Yoshiji (November 1990). A Neural Network for Speaker-Independent Isolated Word Recognition. First International Conference on Spoken Language Processing (ICSLP 90). Kobe, Japan. Archived from the original on 2021-03-07. Retrieved 2019-09-04.
