Feature (computer vision)

In computer vision and image processing, a feature is a piece of information about the content of an image; typically about whether a certain region of the image has certain properties. Features may be specific structures in the image such as points, edges or objects. Features may also be the result of a general neighborhood operation or feature detection applied to the image. Other examples of features are related to motion in image sequences, or to shapes defined in terms of curves or boundaries between different image regions.

More broadly a feature is any piece of information that is relevant for solving the computational task related to a certain application. This is the same sense as feature in machine learning and pattern recognition generally, though image processing has a very sophisticated collection of features. The feature concept is very general and the choice of features in a particular computer vision system may be highly dependent on the specific problem at hand.

Definition

There is no universal or exact definition of what constitutes a feature, and the exact definition often depends on the problem or the type of application. Nevertheless, a feature is typically defined as an "interesting" part of an image, and features are used as a starting point for many computer vision algorithms.

Since features are used as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. Consequently, a key desirable property of a feature detector is repeatability: whether or not the same feature will be detected in two or more different images of the same scene.

Feature detection is a low-level image processing operation. That is, it is usually performed as the first operation on an image and examines every pixel to see if there is a feature present at that pixel. If it is part of a larger algorithm, then the algorithm will typically only examine the image in the region of the features. As a built-in prerequisite to feature detection, the input image is usually smoothed by a Gaussian kernel in a scale-space representation, and one or several feature images are computed, often expressed in terms of local image derivative operations.
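
As a rough sketch of this kind of pipeline (assuming NumPy and SciPy are available; the function name and the scale value are illustrative, not part of any standard API), the following computes a derivative-based feature image from a Gaussian-smoothed input:

```python
# Minimal sketch: Gaussian smoothing followed by a local derivative operation,
# yielding a gradient-magnitude feature image at a chosen scale.
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude_feature_image(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Smooth with a Gaussian of scale `sigma`, then return the gradient magnitude per pixel."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)   # first-order derivatives along rows and columns
    return np.hypot(gx, gy)          # gradient-magnitude feature image

# Usage (hypothetical input): feature = gradient_magnitude_feature_image(gray_image, sigma=1.5)
```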

Occasionally, when feature detection is computationally expensive and there are time constraints, a higher-level algorithm may be used to guide the feature detection stage so that only certain parts of the image are searched for features.

There are many computer vision algorithms that use feature detection as the initial step, so as a result, a very large number of feature detectors have been developed. These vary widely in the kinds of feature detected, the computational complexity and the repeatability.

When features are defined in terms of local neighborhood operations applied to an image, a procedure commonly referred to as feature extraction, one can distinguish between feature detection approaches that produce local decisions on whether a feature of a given type is present at a given image point, and those that produce non-binary data as a result. The distinction becomes relevant when the resulting detected features are relatively sparse. Although local decisions are made, the output from a feature detection step does not need to be a binary image. The result is often represented as sets of (connected or unconnected) coordinates of the image points where features have been detected, sometimes with subpixel accuracy.

When feature extraction is done without local decision making, the result is often referred to as a feature image. Consequently, a feature image can be seen as an image in the sense that it is a function of the same spatial (or temporal) variables as the original image, but where the pixel values hold information about image features instead of intensity or color. This means that a feature image can be processed in a similar way to an ordinary image generated by an image sensor. Feature images are also often computed as an integrated step in algorithms for feature detection.

Feature vectors and feature spaces

In some applications, it is not sufficient to extract only one type of feature to obtain the relevant information from the image data. Instead, two or more different features are extracted, resulting in two or more feature descriptors at each image point. A common practice is to organize the information provided by all these descriptors as the elements of one single vector, commonly referred to as a feature vector. The set of all possible feature vectors constitutes a feature space.[1]

A common example of feature vectors appears when each image point is to be classified as belonging to a specific class. Assuming that each image point has a corresponding feature vector based on a suitable set of features, such that the classes are well separated in the corresponding feature space, the classification of each image point can be done using a standard classification method.
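
As a minimal sketch of this idea (assuming NumPy and SciPy; the descriptors, function names, and nearest-class-mean classifier are illustrative choices, and the class means would normally come from labelled training data), one can stack several per-pixel descriptors into a feature vector and classify each pixel in feature space:

```python
# Build one feature vector per pixel from several descriptors, then assign each
# pixel to the class whose mean feature vector is nearest in feature space.
import numpy as np
from scipy.ndimage import gaussian_filter, generic_filter

def per_pixel_feature_vectors(image: np.ndarray) -> np.ndarray:
    img = image.astype(float)
    smooth = gaussian_filter(img, sigma=1.0)          # smoothed intensity
    gy, gx = np.gradient(smooth)
    grad_mag = np.hypot(gx, gy)                       # gradient magnitude
    local_var = generic_filter(img, np.var, size=5)   # simple local texture measure
    return np.stack([smooth, grad_mag, local_var], axis=-1)   # shape (H, W, 3)

def classify_pixels(features: np.ndarray, class_means: np.ndarray) -> np.ndarray:
    """class_means: (n_classes, 3) array. Returns an (H, W) label image."""
    distances = np.linalg.norm(features[..., None, :] - class_means, axis=-1)
    return np.argmin(distances, axis=-1)
```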

Simplified example of training a neural network for object detection: the network is trained on multiple images known to depict starfish and sea urchins, which are correlated with "nodes" that represent visual features. The starfish match with a ringed texture and a star outline, whereas most sea urchins match with a striped texture and an oval shape. However, the instance of a ring-textured sea urchin creates a weakly weighted association between the two.
When the trained network is subsequently run on an input image:[2] the network correctly detects the starfish. However, the weakly weighted association between ringed texture and sea urchin also confers a weak signal to the latter from one of the two features. In addition, a shell that was not included in the training gives a weak signal for the oval shape, also resulting in a weak signal for the sea urchin output. These weak signals may result in a false positive for sea urchin.
In reality, textures and outlines would not be represented by single nodes, but rather by associated weight patterns of multiple nodes.

Another and related example occurs when neural network-based processing is applied to images. The input data fed to the neural network is often given in terms of a feature vector from each image point, where the vector is constructed from several different features extracted from the image data. During a learning phase, the network can itself find which combinations of different features are useful for solving the problem at hand.

Types

Edges

Edges are points where there is a boundary (or an edge) between two image regions. In general, an edge can be of almost arbitrary shape, and may include junctions. In practice, edges are usually defined as sets of points in the image that have a strong gradient magnitude. Furthermore, some common algorithms will then chain high gradient points together to form a more complete description of an edge. These algorithms usually place some constraints on the properties of an edge, such as shape, smoothness, and gradient value.

Locally, edges have a one-dimensional structure.
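
A hedged sketch of the usual definition of edge points (assuming NumPy and SciPy; the threshold is an arbitrary illustrative parameter): pixels whose Sobel-estimated gradient magnitude exceeds a threshold. Full edge detectors such as Canny add smoothing, non-maximum suppression, and hysteresis on top of this basic idea.

```python
# Return coordinates of pixels with strong gradient magnitude (candidate edge points).
import numpy as np
from scipy.ndimage import sobel

def edge_points(image: np.ndarray, threshold: float) -> np.ndarray:
    img = image.astype(float)
    gx = sobel(img, axis=1)          # horizontal derivative
    gy = sobel(img, axis=0)          # vertical derivative
    magnitude = np.hypot(gx, gy)
    rows, cols = np.nonzero(magnitude > threshold)
    return np.column_stack([rows, cols])   # (row, col) coordinates of detected edge points
```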

Corners/interest points

The terms corners and interest points are used somewhat interchangeably and refer to point-like features in an image, which have a local two-dimensional structure. The name "corner" arose since early algorithms first performed edge detection and then analyzed the edges to find rapid changes in direction (corners). These algorithms were then developed so that explicit edge detection was no longer required, for instance by looking for high levels of curvature in the image gradient. It was then noticed that the so-called corners were also being detected on parts of the image that were not corners in the traditional sense (for instance, a small bright spot on a dark background may be detected). These points are frequently known as interest points, but the term "corner" is used by tradition.
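
As one concrete example (a sketch assuming OpenCV and NumPy; the block size, aperture, Harris parameter k, and the 0.01 threshold factor are arbitrary illustrative values), the Harris detector responds to points with local two-dimensional structure:

```python
# Detect candidate corner / interest points with the Harris response.
import cv2
import numpy as np

def harris_interest_points(gray: np.ndarray) -> np.ndarray:
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    rows, cols = np.nonzero(response > 0.01 * response.max())   # keep strong responses
    return np.column_stack([rows, cols])
```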

Blobs / regions of interest or interest points

Blobs provide a complementary description of image structures in terms of regions, as opposed to corners that are more point-like. Nevertheless, blob descriptors may often contain a preferred point (a local maximum of an operator response or a center of gravity) which means that many blob detectors may also be regarded as interest point operators. Blob detectors can detect areas in an image that are too smooth to be detected by a corner detector.

Consider shrinking an image and then performing corner detection. The detector will respond to points that are sharp in the shrunk image, but may be smooth in the original image. It is at this point that the difference between a corner detector and a blob detector becomes somewhat vague. To a large extent, this distinction can be remedied by including an appropriate notion of scale. Nevertheless, due to their response properties to different types of image structures at different scales, the LoG and DoH blob detectors are also mentioned in the article on corner detection.
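
A hedged sketch of a multi-scale Laplacian-of-Gaussian (LoG) blob measure (assuming NumPy and SciPy; the scale values are illustrative): the scale-normalised response sigma^2 * |LoG| is computed at several scales, and strong local maxima over space and scale are blob candidates. Library routines such as skimage.feature.blob_log package the full detector, including the maxima search.

```python
# Compute one scale-normalised LoG response image per scale, shape (n_scales, H, W).
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_blob_responses(image: np.ndarray, sigmas=(2.0, 4.0, 8.0)) -> np.ndarray:
    img = image.astype(float)
    return np.stack([(s ** 2) * np.abs(gaussian_laplace(img, s)) for s in sigmas])
```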

Ridges

For elongated objects, the notion of ridges is a natural tool. A ridge descriptor computed from a grey-level image can be seen as a generalization of a medial axis. From a practical viewpoint, a ridge can be thought of as a one-dimensional curve that represents an axis of symmetry and, in addition, has an attribute of local ridge width associated with each ridge point. Unfortunately, however, it is algorithmically harder to extract ridge features from general classes of grey-level images than edge, corner or blob features. Nevertheless, ridge descriptors are frequently used for road extraction in aerial images and for extracting blood vessels in medical images (see ridge detection).
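
A hedged sketch of one simple Hessian-based ridge strength measure (assuming NumPy and SciPy; the scale value is illustrative and this is not a full ridge detector): at each pixel, the eigenvalue of the scale-space Hessian with the largest magnitude approximates the principal curvature across a ridge. Dedicated filters such as skimage.filters.sato implement refined variants.

```python
# Per-pixel ridge strength from the eigenvalues of the Gaussian-smoothed Hessian.
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_strength(image: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    img = image.astype(float)
    hyy = gaussian_filter(img, sigma, order=(2, 0))   # d^2/dy^2
    hxx = gaussian_filter(img, sigma, order=(0, 2))   # d^2/dx^2
    hxy = gaussian_filter(img, sigma, order=(1, 1))   # d^2/dxdy
    # Eigenvalues of the 2x2 Hessian [[hxx, hxy], [hxy, hyy]] at each pixel
    half_trace = (hxx + hyy) / 2.0
    root = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    lam1, lam2 = half_trace + root, half_trace - root
    return np.where(np.abs(lam1) > np.abs(lam2), np.abs(lam1), np.abs(lam2))
```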

Detection

Feature detection includes methods for computing abstractions of image information and making local decisions at every image point whether there is an image feature of a given type at that point or not. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves or connected regions.

The extraction of features is sometimes performed over several scales. One such method is the scale-invariant feature transform (SIFT).
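
A minimal usage sketch of a multi-scale detector and descriptor via OpenCV's SIFT implementation (assuming opencv-python 4.4 or later, where SIFT_create is available in the main module; the wrapper function name is ours):

```python
# Detect SIFT keypoints and compute their descriptors for a grayscale image.
import cv2

def sift_features(gray_image):
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    return keypoints, descriptors   # descriptors: one 128-dimensional vector per keypoint
```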

Common feature detectors and their classification:
Feature detector                            Edge   Corner   Blob   Ridge
Canny[3]                                    Yes    No       No     No
Sobel                                       Yes    No       No     No
Harris & Stephens / Plessey[4]              Yes    Yes      No     No
SUSAN[5]                                    Yes    Yes      No     No
Shi & Tomasi[6]                             No     Yes      No     No
Level curve curvature[7]                    No     Yes      No     No
FAST[8]                                     No     Yes      No     No
Laplacian of Gaussian[7]                    No     Yes      Yes    No
Difference of Gaussians[9][10]              No     Yes      Yes    No
Determinant of Hessian[7]                   No     Yes      Yes    No
Hessian strength feature measures[11][12]   No     Yes      Yes    No
MSER[13]                                    No     No       Yes    No
Principal curvature ridges[14][15][16]      No     No       No     Yes
Grey-level blobs[17]                        No     No       Yes    No

Extraction

Once features have been detected, a local image patch around the feature can be extracted. This extraction may involve quite considerable amounts of image processing. The result is known as a feature descriptor or feature vector. Among the approaches used for feature description, one can mention N-jets and local histograms (see scale-invariant feature transform for one example of a local histogram descriptor). In addition to such attribute information, the feature detection step by itself may also provide complementary attributes, such as the edge orientation and gradient magnitude in edge detection and the polarity and the strength of the blob in blob detection.
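
As a toy sketch of a local-histogram descriptor (assuming NumPy and SciPy; the patch radius, bin count, and function name are illustrative, and this is a simplified stand-in for SIFT-like descriptors, not the SIFT descriptor itself), a patch around a detected point can be described by a magnitude-weighted histogram of gradient orientations:

```python
# Describe the patch around (row, col) by a normalised orientation histogram.
# Assumes the point lies at least `radius` pixels from the image border.
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_histogram_descriptor(image, row, col, radius=8, n_bins=8):
    smoothed = gaussian_filter(image.astype(float), 1.0)
    patch = smoothed[row - radius:row + radius, col - radius:col + radius]
    gy, gx = np.gradient(patch)
    angles = np.arctan2(gy, gx)          # gradient orientations in (-pi, pi]
    weights = np.hypot(gx, gy)           # gradient magnitudes used as weights
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi), weights=weights)
    return hist / (hist.sum() + 1e-12)   # normalised descriptor vector
```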

Low-level

Curvature

Image motion

Shape based

Flexible methods

  • Deformable, parameterized shapes
  • Active contours (snakes)

Representation

A specific image feature, defined in terms of a specific structure in the image data, can often be represented in different ways. For example, an edge can be represented as a Boolean variable in each image point that describes whether an edge is present at that point. Alternatively, we can instead use a representation that provides a certainty measure instead of a Boolean statement of the edge's existence and combine this with information about the orientation of the edge. Similarly, the color of a specific region can either be represented in terms of the average color (three scalars) or a color histogram (three functions).
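
To make the colour example concrete (a sketch assuming NumPy; the function names and bin count are illustrative), the two representations mentioned above look like this:

```python
# Two representations of the colour of a region: a 3-scalar mean colour
# versus a per-channel histogram (three functions).
import numpy as np

def mean_color(region):                       # region: (N, 3) array of RGB samples
    return region.mean(axis=0)                # three scalars

def color_histograms(region, bins=16):
    return [np.histogram(region[:, c], bins=bins, range=(0, 255))[0]
            for c in range(3)]                # one histogram per colour channel
```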

When a computer vision system or computer vision algorithm is designed, the choice of feature representation can be a critical issue. In some cases, a higher level of detail in the description of a feature may be necessary for solving the problem, but this comes at the cost of having to deal with more data and more demanding processing. Below, some of the factors which are relevant for choosing a suitable representation are discussed. In this discussion, an instance of a feature representation is referred to as a feature descriptor, or simply descriptor.

Certainty or confidence

Two examples of image features are local edge orientation and local velocity in an image sequence. In the case of orientation, the value of this feature may be more or less undefined if more than one edge is present in the corresponding neighborhood. Local velocity is undefined if the corresponding image region does not contain any spatial variation. As a consequence of this observation, it may be relevant to use a feature representation that includes a measure of certainty or confidence related to the statement about the feature value. Otherwise, it is a typical situation that the same descriptor is used to represent feature values of low certainty and feature values close to zero, with a resulting ambiguity in the interpretation of this descriptor. Depending on the application, such an ambiguity may or may not be acceptable.

In particular, if a feature image will be used in subsequent processing, it may be a good idea to employ a feature representation that includes information about certainty or confidence. This enables a new feature descriptor to be computed from several descriptors, for example, computed at the same image point but at different scales, or from different but neighboring points, in terms of a weighted average where the weights are derived from the corresponding certainties. In the simplest case, the corresponding computation can be implemented as a low-pass filtering of the feature image. The resulting feature image will, in general, be more stable to noise.
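
A minimal sketch of such certainty-weighted smoothing (assuming NumPy and SciPy; the function name and scale are illustrative): each feature value is low-pass filtered with weights given by its certainty, so uncertain values contribute little to the result.

```python
# Certainty-weighted (normalised) averaging of a feature image.
import numpy as np
from scipy.ndimage import gaussian_filter

def certainty_weighted_smoothing(feature, certainty, sigma=2.0):
    numerator = gaussian_filter(feature * certainty, sigma)
    denominator = gaussian_filter(certainty, sigma)
    return numerator / np.maximum(denominator, 1e-12)   # weighted average of neighbouring descriptors
```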

Averageability

In addition to having certainty measures included in the representation, the representation of the corresponding feature values may itself be suitable for an averaging operation or not. Most feature representations can be averaged in practice, but only in certain cases can the resulting descriptor be given a correct interpretation in terms of a feature value. Such representations are referred to as averageable.

For example, if the orientation of an edge is represented in terms of an angle, this representation must have a discontinuity where the angle wraps from its maximal value to its minimal value. Consequently, it can happen that two similar orientations are represented by angles that have a mean that does not lie close to either of the original angles and, hence, this representation is not averageable. There are other representations of edge orientation, such as the structure tensor, which are averageable.
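
A small numerical illustration of this point (assuming NumPy; the helper names and the choice of test orientations are ours): two nearly identical edge orientations, 1 and 179 degrees (orientations are defined modulo 180 degrees), have a plain arithmetic mean of 90 degrees, perpendicular to both, whereas averaging their structure tensors recovers an orientation close to both inputs.

```python
import numpy as np

def mean_orientation_naive(angles_deg):
    # Plain arithmetic mean of the angles, ignoring the wrap-around
    return float(np.mean(angles_deg))

def mean_orientation_tensor(angles_deg):
    # Average the rank-one structure tensors v v^T and read off the dominant axis
    theta = np.radians(angles_deg)
    v = np.stack([np.cos(theta), np.sin(theta)], axis=1)        # unit direction per angle
    tensor = np.mean(v[:, :, None] * v[:, None, :], axis=0)     # mean structure tensor
    eigvals, eigvecs = np.linalg.eigh(tensor)
    principal = eigvecs[:, np.argmax(eigvals)]                  # dominant orientation
    return float(np.degrees(np.arctan2(principal[1], principal[0])) % 180.0)

orientations = [1.0, 179.0]   # two nearly identical orientations (modulo 180 degrees)
print(mean_orientation_naive(orientations))    # 90.0 -- perpendicular to both inputs
print(mean_orientation_tensor(orientations))   # ~0.0 -- close to both inputs (mod 180)
```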

Another example relates to motion, where in some cases only the normal velocity relative to some edge can be extracted. If two such features have been extracted and they can be assumed to refer to the same true velocity, this velocity is not given as the average of the normal velocity vectors. Hence, normal velocity vectors are not averageable. Instead, there are other representations of motion, using matrices or tensors, that give the true velocity in terms of an average operation of the normal velocity descriptors.

Matching

Features detected in each image can be matched across multiple images to establish corresponding features such as corresponding points.

A typical matching algorithm compares and analyzes point correspondences between a reference image and a target image. If a part of the target image (for example, a cluttered scene) shares more correspondences than a chosen threshold, that part is considered to contain the reference object.[18]
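
A hedged sketch of such a correspondence test (assuming OpenCV with SIFT available; the ratio-test factor 0.75 and the minimum match count are arbitrary illustrative parameters, and the function name is ours): descriptors from the two images are matched with a brute-force matcher, filtered with a ratio test, and the object is declared present if enough good correspondences remain.

```python
# Decide whether the reference object appears in the target image by counting
# good descriptor correspondences.
import cv2

def object_present(reference_gray, target_gray, min_matches=10):
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(reference_gray, None)
    _, d2 = sift.detectAndCompute(target_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(d1, d2, k=2)
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]   # ratio test
    return len(good) >= min_matches
```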

References

  1. ^ Scott E Umbaugh (27 January 2005). Computer Imaging: Digital Image Analysis and Processing. CRC Press. ISBN 978-0-8493-2919-7.
  2. ^ Ferrie, C.; Kaiser, S. (2019). Neural Networks for Babies. Sourcebooks. ISBN 1492671207.
  3. ^ Canny, J. (1986). "A Computational Approach To Edge Detection". IEEE Transactions on Pattern Analysis and Machine Intelligence. 8 (6): 679–714. doi:10.1109/TPAMI.1986.4767851. PMID 21869365. S2CID 13284142.
  4. ^ C. Harris; M. Stephens (1988). "A combined corner and edge detector" (PDF). Proceedings of the 4th Alvey Vision Conference. pp. 147–151. Archived from the original (PDF) on 2022-04-01. Retrieved 2021-02-11.
  5. ^ S. M. Smith; J. M. Brady (May 1997). "SUSAN - a new approach to low level image processing". International Journal of Computer Vision. 23 (1): 45–78. doi:10.1023/A:1007963824710. S2CID 15033310.
  6. ^ J. Shi; C. Tomasi (June 1994). "Good Features to Track". 9th IEEE Conference on Computer Vision and Pattern Recognition. Springer.
  7. ^ a b c T. Lindeberg (1998). "Feature detection with automatic scale selection" (abstract). International Journal of Computer Vision. 30 (2): 77–116. doi:10.1023/A:1008045108935. S2CID 723210.
  8. ^ E. Rosten; T. Drummond (2006). "Machine learning for high-speed corner detection". European Conference on Computer Vision. Springer. pp. 430–443. CiteSeerX 10.1.1.60.3991. doi:10.1007/11744023_34.
  9. ^ J. L. Crowley; A. C. Parker (March 1984). "A Representation for Shape Based on Peaks and Ridges in the Difference of Low Pass Transform". IEEE Transactions on Pattern Analysis and Machine Intelligence. 6 (2): 156–170.
  10. ^ D. Lowe (2004). "Distinctive Image Features from Scale-Invariant Keypoints". International Journal of Computer Vision. 60 (2): 91. CiteSeerX 10.1.1.73.2924. doi:10.1023/B:VISI.0000029664.99615.94. S2CID 221242327.
  11. ^ T. Lindeberg (2013). "Scale selection properties of generalized scale-space interest point detectors". Journal of Mathematical Imaging and Vision. 46 (2): 177–210.
  12. ^ T. Lindeberg (2015). "Image matching using generalized scale-space interest points". Journal of Mathematical Imaging and Vision. 52 (1): 3–36.
  13. ^ J. Matas; O. Chum; M. Urban; T. Pajdla (2002). "Robust wide baseline stereo from maximally stable extremum regions" (PDF). British Machine Vision Conference. pp. 384–393.
  14. ^ R. Haralick, "Ridges and Valleys on Digital Images", Computer Vision, Graphics, and Image Processing vol. 22, no. 10, pp. 28–38, Apr. 1983.
  15. ^ D. Eberly, R. Gardner, B. Morse, S. Pizer, C. Scharlach, Ridges for image analysis, Journal of Mathematical Imaging and Vision, v. 4 n. 4, pp. 353–373, Dec. 1994.
  16. ^ T. Lindeberg (1998). "Edge detection and ridge detection with automatic scale selection" (abstract). International Journal of Computer Vision. 30 (2): 117–154. doi:10.1023/A:1008097225773. S2CID 207658261.
  17. ^ T. Lindeberg (1993). "Detecting Salient Blob-Like Image Structures and Their Scales with a Scale-Space Primal Sketch: A Method for Focus-of-Attention" (abstract). International Journal of Computer Vision. 11 (3): 283–318. doi:10.1007/BF01469346. S2CID 11998035.
  18. ^ "Object Detection in a Cluttered Scene Using Point Feature Matching - MATLAB & Simulink". www.mathworks.com. Retrieved 2019-07-06.
