filename | title | text |
---|---|---|
mil_tactics_continued_pretraining.csv | Foot drill | Drill in antiquity: Vegetius composed his treatise on the Roman Empire's military, De Re Militari, at some point between 378 and 390 CE, during the reign of Valentinian II in the Western Roman Empire. The work consists of three separate yet related books: the first establishes methods of selecting and training new recruits, while the second and third continue from it, describing in detail training and discipline as they applied not only to the troops but also to the leadership in training and in battle, and arguing for reforms in the army.
Within these books can be found a detailed guide to drilling the army. Among these drills, the passage on the military step prescribes that initial training should consist of "constant practice of marching quick and together. Nor is anything of more consequence either on the march or in the line than that they should keep their ranks with the greatest exactness. For troops who march in an irregular and disorderly manner are always in great danger of being defeated. They should march with the common military step twenty miles in five summer-hours, and with the full step, which is quicker, twenty-four miles in the same number of hours. If they exceed this pace, they no longer march but run, and no certain rate can be assigned."
History of drill: Drill became less common after the fall of the Western Roman Empire and the resultant disappearance of professional armies from Western Europe. In the Middle Ages the individualist nature of knightly combat, focused on personal skill and heroism, coupled with the ad-hoc nature of the supporting levies, meant that there was no place for the mass subordination of troops through drill. The rise of the mercenary during the Renaissance brought some degree of military professionalisation, producing co-ordinated and practised units such as the Swiss mercenaries, but standardisation was still lacking.
The mass use of firearms in the later 16th century led to the resurgence of what was considered at the time "Roman-style drill". The movement was pioneered by Maurice of Nassau: intended to enable his soldiers to handle their firearms efficiently, his drill prescribed forty-two movements from taking up the weapon to firing. As armies became full-time and more professionalised over the course of the 17th century, it was a natural progression for drill to expand its remit from weapons handling to the manoeuvre and forming of bodies of troops. The most notable figure of the early 17th century was Gustavus Adolphus, who fielded one of the largest standing armies of the Thirty Years' War before his death in battle.
What would today be known strictly as foot drill emerged over the course of the 17th century. This was the pike and shot period, in which musketeers and arquebusiers, lacking bayonets, were defended from infantry and cavalry by blocks of pikemen. The requirement for quick and accurate movement of these large bodies of troops, in order to outmanoeuvre their opponents at the tactical level, led to the introduction of standardised movements and commands. These were the first versions of foot drill, intended to allow a group of disparate individuals to form one organised body of men, moving single-mindedly with united purpose. Additionally, in the confusion of battle it was found that the clear and concise nature of drill commands helped the individual soldier cope with the psychological stresses of combat. The apogee of this style of warfare is arguably the English Civil War, the last major war fought by these methods before the introduction of the bayonet created "the Queen of Battles": the line infantry.
Line infantry won or lost on the rigidity of their foot drill. In the later 17th century that drill evolved into a tool for the complete subordination of the individual. The Prussians demanded automaton-like levels of drill competence: constant and heavy drilling would change a man from a civilian into a soldier who obeyed commands reflexively, instilling both discipline and subordination. In a period when private soldiers were recruited from what was considered the basest social class, it was thought particularly important to "break the man" into service. For all this harshness, desertion remained commonplace.
In battle, drill was a force multiplier. With the muskets of the era having short ranges owing to the nature of their ammunition and the reluctance of men to kill one another at short range, it was necessary for battalions to form up as broad lines 2 to 4 ranks deep at distances averaging 25 yards (approx. 20 m). In such conditions, particularly when one considers the nightmarish nature of the ubiquitous cannonade and the build-up of smoke from musket discharge, drill allowed the soldier to withdraw into himself and react to commands. There are anecdotal reports of soldiers in this almost trance-like state reaching out to try to catch cannonballs at the end of their arcs, with unpleasant results. The psychological boost provided by being part of an effectively faceless mass, and by surrendering one's fate to that of the corporate group, enabled men to stand in the face of the enemy that bit longer than their foes. As such, the better the drill, the better – in theory – the soldiers. These elements proved particularly powerful for most European states in colonial theatres, where massed drill and the discipline it imbued allowed small expeditionary forces to repeatedly defeat larger indigenous forces.
Additionally, greater drill equated to greater manoeuvrability. When troops were thoroughly drilled they could move confidently at speed without their formations – carefully prescribed in order to maximise the use of their weapons – breaking up, particularly over rough ground. When formations broke up, precious time had to be spent reforming them in the face of the enemy; additionally, loose formations bred confusion. The difference between a body of troops and a disorganised crowd is a narrow one, and when faced with musketry, cavalry or cannonade a loose formation was more prone to succumbing to panic and rout. Proficiency in drill further enabled the creativity of generals. Troops new to drill are unconfident and tend to panic or become confused when new commands are introduced, whereas well-drilled troops can more easily be taught new formations, building on the base of experience previously garnered. In a period when infantry warfare revolved around foot drill, this could obviously prove an advantage. As an example, the British used an unorthodox two-rank line during the later 18th and early-to-mid 19th centuries as a force multiplier. In the Peninsular Campaign they were able to adapt this formation from strictly linear to a shallow crescent. Coordinating even a minor formation change for roughly 200 men was considered an impressive feat.
Drill was exported to the rest of the world on the back of colonial victories, with most imperial nations training local armed forces in European-style drill. One famous example of this trend was the Indian sepoys of the British Empire.
As weapons gained in range and accuracy, foot drill became less and less important in battle. Advances in formed lines and columns were still attempted; they worked during the Crimean War but were becoming dangerously obsolete by the time of the Franco-Prussian War. The last widespread use of formed infantry in the attack, particularly in columns, was in the first few weeks of the First World War.
Origins of modern drill in the U.S. Military: United States military drill originated in 1778, as part of a training program implemented by Baron Friedrich von Steuben to improve the discipline and organisation of soldiers serving in the Continental Army. The following year Baron von Steuben, by then a Major General and the Inspector General of the Continental Army, wrote the Army's first field manual, "The Regulations for the Order and Discipline of the Troops of the United States", which has come to be more commonly known as the "Blue Book". The methods of drill that von Steuben initiated remained largely unchanged between their inception and the time of the American Civil War. One major change to come about since that time is that troops now march at a cadence of 120 steps per minute, instead of the original 76 steps per minute at the time of the American Revolution.
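The effect of the cadence change can be made concrete with a quick calculation. The sketch below is a hypothetical illustration that simply multiplies cadence by pace length; the 30-inch pace is an assumed figure taken from modern drill practice, not something stated above.

```python
# Rough march speed from cadence and pace length (illustrative only).
# The 30-inch pace is an assumption from modern drill practice, not from the text.

INCHES_PER_MILE = 63360

def march_speed_mph(steps_per_minute: float, pace_inches: float = 30.0) -> float:
    """Approximate marching speed in miles per hour for a given cadence."""
    inches_per_hour = steps_per_minute * pace_inches * 60
    return inches_per_hour / INCHES_PER_MILE

for cadence in (76, 120):
    print(f"{cadence} steps/min -> {march_speed_mph(cadence):.2f} mph")
```

On that assumed pace, 76 steps per minute works out to roughly 2.2 mph and 120 steps per minute to roughly 3.4 mph.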
The stated aim of drill is to "enable a commander or noncommissioned officer to move his unit from one place to another in an orderly manner; to aid in disciplinary training by instilling habits of precision and response to the leader’s orders; and to provide for the development of all soldiers in the practice of commanding troops."
Between branches of the military, as well as between the military forces of various countries, the methods of drill will vary. In the United States Armed Forces, the basis of drill procedures can be traced to von Steuben's "Blue Book".
Drill in the modern day: Drill today is used as a teaching tool for instilling discipline into new recruits in armies the world over, although style and diligence varies from nation to nation. Some of the most famous drill in the world remains that of the Guards Division.
Drill is most commonly seen at ceremonial and public functions and has evolved into something of an art-form. Many nations have dedicated Drill Teams, although the Guards Division, faithful to the history of Foot Drill, remain full service combat infantry.
References:
External links: E-text of English translation of De Re Militari
Homepage of De Re Militari: The Society for Medieval Military History, an academic association that is concerned about medieval warfare
An English translation of De Re Militari by Lieutenant John Clarke (1767)
Army Study Guide.com Pubs and Forms |
mil_tactics_continued_pretraining.csv | Force multiplication | History: Notable historical examples of force multiplication include:
Fortifications: e.g. the Theodosian Wall of Constantinople
Reliance on air force by the Coalition in the Gulf War and the 2003 invasion of Iraq
Doctrinal changes: During the First World War, the Germans experimented with what were called "storm tactics", in which a small group of highly trained soldiers (stormtroopers) would open a salient through which much larger forces could penetrate. These met with only limited success, breaking through the first lines of defence but lacking the staying power to break the opposing forces entirely. The 1939 blitzkrieg, in which coordinated mechanized ground forces broke through with aircraft in close support, was vastly more effective.
Towards the end of the Second World War, the German Army introduced Kampfgruppe combat formations, which were composed of whatever units happened to be available. Though poor-quality units generally made up the greater part of them, these formations often performed successfully because of their high degree of flexibility and adaptability. Mission-type tactics, as opposed to extremely specific directives that give no discretion to the junior commander, are now widely used by modern militaries because of their force multiplication. Originating from the German concept of Auftragstaktik, such tactics may be developing even more rapidly in the concept of network-centric warfare (NCW), in which subordinate commanders receive information not only from their own commanders but also from adjacent units.
A different paradigm, the "high-low mix", was one result of the theories of John Boyd: a large number of less expensive aircraft, coupled with a small number of extremely capable "silver bullet" aircraft, had the effect of a much larger force. Boyd's concept of quick action is based on the repeated application of the "Boyd loop", consisting of the steps:
Observe: make use of the best sensors and other intelligence available
Orient: put the new observations into a context with the old
Decide: select the next action based on the combined observation and local knowledge
Act: carry out the selected action, ideally while the opponent is still observing your last action.
Boyd's concept is also known as the OODA Loop and is a description of the decision-making process that Boyd contended applies to business, sports, law enforcement and military operations. Boyd's doctrine is widely taught in the American military, and one of the aims of network centric warfare is to "get inside his OODA loop." In other words, one should go from observation to action before the enemy can get past orientation, preventing him from ever being able to make an effective decision or put it into action. Small unit leadership is critical to this, and NCW's ability to disseminate information to small unit leaders enables such tactics.
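As a rough, hypothetical illustration of the cycle just described, the sketch below renders one pass through the loop as plain Python; the sensor, context and action objects are invented placeholders rather than anything from Boyd's writings, and the point is only that the advantage lies in how quickly the whole cycle can be repeated.

```python
# Minimal sketch of Boyd's OODA cycle as an iterated loop.
# The sensor, context and action objects are hypothetical placeholders.

def observe(sensors):
    """Gather the latest raw observations from the available sensors."""
    return [sensor() for sensor in sensors]

def orient(observations, context):
    """Put the new observations into context with what is already known."""
    updated = dict(context)
    updated["latest"] = observations
    return updated

def decide(context, options):
    """Select the next action from the available options (here, by a simple score)."""
    return max(options, key=lambda option: option.score(context))

def act(action):
    """Carry out the selected action, ideally before the opponent finishes orienting."""
    action.execute()

def ooda_loop(sensors, context, options, cycles=1):
    """Run the cycle repeatedly; completing it faster than the opponent is the aim."""
    for _ in range(cycles):
        context = orient(observe(sensors), context)
        act(decide(context, options))
    return context
```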
Network-centric warfare can provide additional information and can help prevent friendly fire, but it also allows "swarm tactics" and the seizing of opportunities by subordinate forces. Edwards (2000, p. 2) defines a swarming case as "any historical example in which the scheme of maneuver involves the convergent attack of five (or more) semiautonomous (or autonomous) units on a targeted force in some particular place. 'Convergent' implies an attack from most of the points on the compass."
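Edwards's definition amounts to a countable test: at least five semi-autonomous units converging on one target from most compass directions. The sketch below is a hypothetical encoding of that test; the "most of the compass" condition is approximated, purely as an assumption for illustration, by requiring attack bearings in at least three of the four compass quadrants.

```python
# Hypothetical check of Edwards's swarming criterion: five or more converging
# units attacking from "most of the points on the compass". The quadrant
# threshold is an illustrative assumption, not part of the definition.

def is_swarming_case(attack_bearings_deg, min_units=5, min_quadrants=3):
    """attack_bearings_deg: compass bearing (0-359) of each unit's axis of attack."""
    if len(attack_bearings_deg) < min_units:
        return False
    quadrants = {int(bearing % 360) // 90 for bearing in attack_bearings_deg}
    return len(quadrants) >= min_quadrants

print(is_swarming_case([10, 95, 170, 260, 350]))  # True: five units, four quadrants
print(is_swarming_case([10, 15, 20, 25, 30]))     # False: all from one narrow arc
```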
Another version of "swarming" is evident in air-to-ground attack formations in which the attack aircraft do not approach from one direction, at one time, or at the same altitude, but schedule the attacks so each one requires a Boyd-style OODA iteration to deal with a new threat. Replacement training units (RTU) were "finishing schools" for pilots who needed to know not just the school solution, but the actual tactics being used in Vietnam. Referring to close air support, "In the RTU, new pilots learned the rules of the road for working with a forward air controller (FAC). The hardest part was finding the small aircraft as it circled over the target area. The fast-moving fighters used directional finding/steering equipment to get close enough to the slow, low FAC until someone in the flight could get an eyeball on him—a tally-ho. Once the FAC was in sight, he would give the fighters a target briefing—type of target, elevation, attack heading, location of friendlies, enemy defensive fire, best egress heading if hit by enemy fire, and other pertinent data. Usually the fighters would set up a circle, called a wheel or "wagon wheel", over the FAC, and wait for him to mark the target. Once the target was marked, the flight leader would attack first."
Psychology: Napoleon is well known for his comment "The moral is to the physical as three to one." Former United States Secretary of State and Chairman of the Joint Chiefs of Staff Colin Powell has said: "Perpetual optimism is a force multiplier." Morale, training, and ethos have long been known to result in disproportionate effects on the battlefield.
Psychological warfare can target the morale, politics, and values of enemy soldiers and their supporters to effectively neutralize them in a conflict.
Protecting local cultural heritage sites and investing in the relationships between local civilians and military forces can be seen as force multipliers leading to benefits in meeting or sustaining military objectives.
Technology: Ranged weapons that hit their target can be far more effective than those that miss. That is why rifled muskets for infantry and rangefinders for artillery became commonplace in the 19th century.
Two new weapons of World War I, barbed wire and the machine gun, multiplied defensive forces, leading to the stalemate of trench warfare.
Aircraft carriers: Aircraft carriers, such as the USS Gerald R. Ford, can carry more than 75 aircraft, along with the fuel and ammunition for the full range of carrier tasks: air-to-air, anti-ship and air-to-ground missions. When deployed, an aircraft carrier is a massive force multiplier that can swing an engagement in favour of the side that operates it. Carriers can embark different types of aircraft for different roles, meaning the degree of force multiplication varies with the specific task at hand.
Tankers: Airborne tanker aircraft, such as the Boeing KC-135, are a very significant force multiplier. They carry fuel so that bomber and fighter aircraft can take off loaded with extra weapons instead of full fuel tanks. Tankers also increase the range and loiter time within or near the target area by off-loading fuel when it is needed. They can also be used to rapidly deploy fighters, bombers, SIGINT aircraft, airborne command posts, and cargo aircraft from the United States to the areas where they are needed. The force multiplier of a KC-135R can be anywhere from 1.5 to as much as 6 when used near the target area.
Bombers: At one extreme, a stealth aircraft like the Northrop Grumman B-2 Spirit strategic bomber can attack a target without needing the large numbers of escort fighter aircraft, electronic-warfare aircraft, Suppression of Enemy Air Defenses, and other supporting aircraft that would be needed were conventional bombers used against the same target.
Precision-guided munitions (PGMs) give an immense multiplication. The Thanh Hoa Bridge in North Vietnam had been only mildly damaged by approximately 800 sorties by aircraft armed with conventional unguided bombs, but had one of its spans destroyed by a 12-plane mission, of which 8 carried laser-guided bombs. Two small subsequent missions, again with laser-guided bombs, completed the destruction of this target. Precision-guided munitions are one example of what has been called the Revolution in Military Affairs. In World War II, by contrast, British night bombers could hit, at best, an area of a city.
Modern PGMs commonly put a bomb within 3–10 meters of its target (see Circular error probable), and most carry an explosive charge significant enough that this uncertainty is effectively voided. See the use of heavy bombers in direct support of friendly troops in Afghanistan, using the technique of Ground-Aided Precision Strike.
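A quoted CEP can be turned into a hit probability with a standard textbook model: if miss distances are circularly (Rayleigh) distributed, the chance of landing within radius r of the aim point is 1 - 0.5^((r/CEP)^2). The sketch below applies that formula to the 3-10 m CEP range quoted above; the 10 m target radius is an illustrative assumption, not a figure from the text.

```python
# Hit probability within a given radius for a weapon of given CEP, assuming
# circularly (Rayleigh) distributed miss distances (a standard textbook model).

def p_hit_within(radius_m: float, cep_m: float) -> float:
    """Probability that a single weapon lands within radius_m of the aim point."""
    return 1.0 - 0.5 ** ((radius_m / cep_m) ** 2)

# Illustrative 10 m target radius (an assumption) against the quoted 3-10 m CEP.
for cep in (3, 10):
    p = p_hit_within(10, cep)
    print(f"CEP {cep:>2} m: P(hit within 10 m) = {p:.2f}, weapons per expected hit = {1 / p:.1f}")
```

Under the same assumptions, an unguided bomb with a CEP on the order of 100 m (again an illustrative figure) would have a per-weapon hit probability well under one percent against such a target, which is broadly consistent with the hundreds of sorties expended against the Thanh Hoa Bridge.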
Fighter combat: Fighter aircraft coordinated by an AWACS control aircraft, so that they can approach targets without being revealed by their own radar, and which are assigned specific targets so that duplication is avoided, are far more effective than an equivalent number of fighters dependent on their own resources for target acquisition.
In exercises between the Indian and US air forces, the Indian pilots had an opportunity to operate with AWACS control, and found it extremely effective. India has ordered AWACS aircraft, using Israeli Phalcon electronics on a Russian airframe, and this exercise is part of their preparation. Officer and pilot comments included "definitely was a force multiplier. Giving you an eye deep beyond you". "We could pick up incoming targets whether aircraft or missiles almost 400 kilometers away. It gives a grand battle coordination in the air".
Creating local forces: The use of small numbers of specialists to create larger effective forces is another form of multiplication. The basic A Team of US Army Special Forces is a 12-man unit that can train and lead a company-sized unit (100–200 men) of local guerrillas.
Deception: Deception can produce the potential effect of a much larger force. The fictitious First United States Army Group (FUSAG) was portrayed to the World War II Germans as the main force for the invasion of Europe. Operation Bodyguard
successfully gave the impression that FUSAG was to land at the Pas de Calais, convincing the Germans that the real attack at Normandy was a feint. As a result of the successful deception, the Normandy force penetrated deeply, in part because the Germans held back strategic reserves that they thought would be needed at the Pas de Calais against what was a nonexistent force. FUSAG's existence was suggested by the use of decoy vehicles that the Allies allowed to be photographed, fictitious radio traffic generated by a small number of specialists, and the Double Cross System. Double Cross referred to turning all surviving German spies in the UK into double agents, who sent back convincing reports that were consistent with the deception programs being conducted by the London Controlling Section.
See also: Asymmetric warfare
C4ISTAR
Lanchester's laws
List of established military terms
Network-centric warfare
References: |
mil_tactics_continued_pretraining.csv | Fortification | Nomenclature: Many United States Army installations are known as forts, although they are not always fortified. During the pioneering era of North America, many outposts on the frontiers, even non-military outposts, were referred to generically as forts. Larger military installations may be called fortresses; smaller ones were once known as fortalices. The word fortification can refer to the practice of improving an area's defense with defensive works. City walls are fortifications but are not necessarily called fortresses.
The art of setting out a military camp or constructing a fortification has traditionally been called castrametation since the time of the Roman legions. The art of laying siege to a fortification and of destroying it is commonly called siegecraft or siege warfare and is formally known as poliorcetics. In some texts, this latter term also applies to the art of building a fortification.
Fortification is usually divided into two branches: permanent fortification and field fortification. Permanent fortifications are erected at leisure, with all the resources of constructive and mechanical skill that a state can supply, and are built of enduring materials. Field fortifications, for example breastworks, often known as fieldworks or earthworks, are extemporized by troops in the field, perhaps assisted by such local labour and tools as may be procurable and with materials that do not require much preparation, such as soil, brushwood, light timber, or sandbags (see sangar). An example of field fortification was the construction of Fort Necessity by George Washington in 1754.
There is also an intermediate branch known as semi-permanent fortification. This is employed when in the course of a campaign it becomes desirable to protect some locality with the best imitation of permanent defences that can be made in a short time, ample resources and skilled civilian labour being available. An example of this is the construction of Roman forts in England and in other Roman territories where camps were set up with the intention of staying for some time, but not permanently.
Castles are fortifications regarded as distinct from the generic fort or fortress in that a castle is the residence of a monarch or noble and commands a specific defensive territory. An example is the massive medieval castle of Carcassonne.
History:
Early uses: Defensive fences for protecting humans and domestic animals against predators were used long before the appearance of writing and began "perhaps with primitive man blocking the entrances of his caves for security from large carnivores".
From very early history to modern times, walls have been a necessity for many cities. Amnya Fort in western Siberia has been described by archaeologists as one of the oldest known fortified settlements, as well as the northernmost Stone Age fort. In Bulgaria, near the town of Provadia, a walled fortified settlement today called Solnitsata, dating from 4700 BC, had a diameter of about 300 feet (91 m), was home to 350 people living in two-storey houses, and was encircled by a fortified wall. The huge walls around the settlement, built very tall from stone blocks 6 feet (1.8 m) high and 4.5 feet (1.4 m) thick, make it one of the earliest walled settlements in Europe, though it is younger than the walled town of Sesklo in Greece, which dates from 6800 BC.
Uruk in ancient Sumer (Mesopotamia) is one of the world's oldest known walled cities. The Ancient Egyptians also built fortresses on the frontiers of the Nile Valley to protect against invaders from neighbouring territories, as well as circle-shaped mud brick walls around their cities. Many of the fortifications of the ancient world were built with mud brick, often leaving them no more than mounds of dirt for today's archaeologists. A massive prehistoric stone wall surrounded the ancient temple of the Ness of Brodgar in Scotland around 3200 BC. Named the "Great Wall of Brodgar", it was 4 metres (13 ft) thick and 4 metres tall. The wall had some symbolic or ritualistic function. The Assyrians deployed large labour forces to build new palaces, temples and defensive walls.
Bronze Age Europe: In Bronze Age Malta, some settlements also began to be fortified. The most notable surviving example is Borġ in-Nadur, where a bastion built in around 1500 BC was found. Exceptions were few—notably, ancient Sparta and ancient Rome did not have walls for a long time, choosing to rely on their militaries for defence instead. Initially, these fortifications were simple constructions of wood and earth, which were later replaced by mixed constructions of stones piled on top of each other without mortar. In ancient Greece, large stone walls had been built in Mycenaean Greece, such as the ancient site of Mycenae (famous for the huge stone blocks of its 'cyclopean' walls). In classical era Greece, the city of Athens built two parallel stone walls, called the Long Walls, that reached their fortified seaport at Piraeus a few miles away.
In Central Europe, the Celts built large fortified settlements known as oppida, whose walls seem partially influenced by those built in the Mediterranean. The fortifications were continuously being expanded and improved. Around 600 BC, in Heuneburg, Germany, forts were constructed with a limestone foundation supported by a mudbrick wall approximately 4 metres tall, probably topped by a roofed walkway, thus reaching a total height of 6 metres. The wall was clad with lime plaster, regularly renewed. Towers protruded outwards from it.
The Oppidum of Manching (German: Oppidum von Manching) was a large Celtic proto-urban or city-like settlement at modern-day Manching (near Ingolstadt), Bavaria (Germany). The settlement was founded in the 3rd century BC and existed until c. 50–30 BC. It reached its largest extent during the late La Tène period (late 2nd century BC), when it had a size of 380 hectares. At that time, 5,000 to 10,000 people lived within its 7.2 km long walls. The oppidum of Bibracte is another example of a Gaulish fortified settlement.
Bronze and Iron Age Near East: The term casemate wall is used in the archaeology of Israel and the wider Near East, having the meaning of a double wall protecting a city or fortress, with transverse walls separating the space between the walls into chambers. These could be used as such, for storage or residential purposes, or could be filled with soil and rocks during siege in order to raise the resistance of the outer wall against battering rams. The casemate wall was originally thought to have been introduced to the region by the Hittites, but this has been disproved by the discovery of examples predating their arrival, the earliest being at Ti'inik (Taanach), where such a wall has been dated to the 16th century BC. Casemate walls became a common type of fortification in the Southern Levant between the Middle Bronze Age (MB) and Iron Age II, being more numerous during the Iron Age and peaking in Iron Age II (10th–6th century BC). However, the construction of casemate walls had begun to be replaced by sturdier solid walls by the 9th century BC, probably due to the development of more effective battering rams by the Neo-Assyrian Empire. Casemate walls could surround an entire settlement, but most only protected part of it. The three different types included freestanding casemate walls, then integrated ones where the inner wall was part of the outer buildings of the settlement, and finally filled casemate walls, where the rooms between the walls were filled with soil right away, allowing for a quick but nevertheless stable construction of particularly high walls.
Ancient Rome: The Romans fortified their cities with massive, mortar-bound stone walls. The most famous of these are the largely extant Aurelian Walls of Rome and the Theodosian Walls of Constantinople, together with partial remains elsewhere. These are mostly city gates, like the Porta Nigra in Trier or Newport Arch in Lincoln.
Hadrian's Wall was built by the Roman Empire across the width of what is now northern England following a visit by Roman Emperor Hadrian (AD 76–138) in AD 122.
Indian subcontinent: A number of forts dating from the Later Stone Age to the British Raj are found in the mainland Indian subcontinent (modern-day India, Pakistan, Bangladesh and Nepal). "Fort" is the word used in India for all old fortifications. Numerous Indus Valley Civilization sites exhibit evidence of fortifications. By about 3500 BC, hundreds of small farming villages dotted the Indus floodplain. Many of these settlements had fortifications and planned streets. The stone and mud brick houses of Kot Diji were clustered behind massive stone flood dykes and defensive walls, for neighbouring communities bickered constantly about the control of prime agricultural land. The fortification varies by site. While Dholavira has stone-built fortification walls, Harappa is fortified using baked bricks; sites such as Kalibangan exhibit mudbrick fortifications with bastions, and Lothal has a quadrangular fortified layout. Evidence also suggests fortifications at Mohenjo-daro. Even a small town—for instance, Kotada Bhadli, exhibiting sophisticated fortification-like bastions—shows that nearly all major and minor towns of the Indus Valley Civilization were fortified.
Forts also appeared in urban cities of the Gangetic valley during the second urbanisation period between 600 and 200 BC, and as many as 15 fortification sites have been identified by archaeologists throughout the Gangetic valley, such as Kaushambi, Mahasthangarh, Pataliputra, Mathura, Ahichchhatra, Rajgir, and Lauria Nandangarh. The earliest Mauryan-period brick fortification occurs in one of the stupa mounds of Lauria Nandangarh, which is 1.6 km in perimeter, oval in plan, and encloses a habitation area. Mundigak (c. 2500 BC) in present-day south-east Afghanistan has defensive walls and square bastions of sun-dried bricks.
India currently has over 180 forts, with the state of Maharashtra alone having over 70 forts, which are also known as durg, many of them built by Shivaji, founder of the Maratha Empire.
A large majority of forts in India are in North India. The most notable forts are the Red Fort at Old Delhi, the Red Fort at Agra, the Chittor Fort and Mehrangarh Fort in Rajasthan, the Ranthambhor Fort, Amer Fort and Jaisalmer Fort also in Rajasthan and Gwalior Fort in Madhya Pradesh.
The Arthashastra, the Indian treatise on military strategy, describes six major types of forts, differentiated by their major modes of defence.
Sri Lanka: Forts in Sri Lanka date back thousands of years, with many being built by Sri Lankan kings. These include several walled cities. With the outset of colonial rule in the Indian Ocean, Sri Lanka was occupied by several major colonial empires that from time to time became the dominant power in the Indian Ocean. The colonists built several western-style forts, mostly in and around the coast of the island. The first to build colonial forts in Sri Lanka were the Portuguese; these forts were captured and later expanded by the Dutch. The British occupied these Dutch forts during the Napoleonic wars. Most of the colonial forts were garrisoned up until the early 20th century. The coastal forts had coastal artillery manned by the Ceylon Garrison Artillery during the two world wars.
Most of these were abandoned by the military but retained civil administrative officers, while others retained military garrisons, which were more administrative than operational. Some were reoccupied by military units with the escalation of the Sri Lankan Civil War; Jaffna fort, for example, came under siege several times.
China: Large tempered earth (i.e. rammed earth) walls were built in ancient China since the Shang dynasty (c. 1600–1050 BC); the capital at ancient Ao had enormous walls built in this fashion (see siege for more info). Although stone walls were built in China during the Warring States (481–221 BC), mass conversion to stone architecture did not begin in earnest until the Tang dynasty (618–907 AD). The Great Wall of China had been built since the Qin dynasty (221–207 BC), although its present form was mostly an engineering feat and remodelling of the Ming dynasty (1368–1644 AD).
In addition to the Great Wall, a number of Chinese cities also employed the use of defensive walls to defend their cities. Notable Chinese city walls include the city walls of Hangzhou, Nanjing, the Old City of Shanghai, Suzhou, Xi'an and the walled villages of Hong Kong. The famous walls of the Forbidden City in Beijing were established in the early 15th century by the Yongle Emperor. The Forbidden City made up the inner portion of the Beijing city fortifications.
Philippines:
Spanish colonial fortifications: During the Spanish Era several forts and outposts were built throughout the archipelago. Most notable is Intramuros, the old walled city of Manila located along the southern bank of the Pasig River. The historic city was home to centuries-old churches, schools, convents, government buildings and residences, the best collection of Spanish colonial architecture before much of it was destroyed by the bombs of World War II. Of all the buildings within the 67-acre city, only one building, the San Agustin Church, survived the war.
Partial listing of Spanish forts:
Intramuros, Manila
Cuartel de Santo Domingo, Santa Rosa, Laguna
Fuerza de Cuyo, Cuyo, Palawan
Fuerza de Cagayancillo, Cagayancillo, Palawan
Real Fuerza de Nuestra Señora del Pilar de Zaragoza, Zamboanga City
Fuerza de San Felipe, Cavite City
Fuerza de San Pedro, Cebu
Fuerte dela Concepcion y del Triunfo, Ozamiz, Misamis Occidental
Fuerza de San Antonio Abad, Manila
Fuerza de Pikit, Pikit, Cotabato
Fuerza de Santiago, Romblon, Romblon
Fuerza de Jolo, Jolo, Sulu
Fuerza de Masbate, Masbate
Fuerza de Bongabong, Bongabong, Oriental Mindoro
Cotta de Dapitan, Dapitan, Zamboanga del Norte
Fuerte de Alfonso XII, Tukuran, Zamboanga del Sur
Fuerza de Bacolod, Bacolod, Lanao del Norte
Guinsiliban Watchtower, Guinsiliban, Camiguin
Laguindingan Watchtower, Laguindingan, Misamis Oriental
Kutang San Diego, Gumaca, Quezon
Baluarte Luna, Luna, La Union
Local fortifications: The Ivatan people of the northern islands of Batanes built their so-called idjang on hills and elevated areas to protect themselves during times of war. These fortifications were likened to European castles because of their purpose. Usually, the only entrance to the castles would be via a rope ladder that would only be lowered for the villagers and could be kept away when invaders arrived.
Around 2000 BC, the Igorots built forts made of stone walls that averaged several meters in width and about two to three times that width in height.
The Muslim Filipinos of the south built strong fortresses called kota or moong to protect their communities. Usually, many of the occupants of these kotas were entire families rather than just warriors. Lords often had their own kotas to assert their right to rule; a kota served not only as a military installation but as a palace for the local lord. It is said that at the height of the Maguindanao Sultanate's power, it blanketed the areas around Western Mindanao with kotas and other fortifications to block the Spanish advance into the region. These kotas were usually made of stone and bamboo or other light materials and surrounded by trench networks; as a result, some of them were easily burned or destroyed. With further Spanish campaigns in the region, the sultanate was subdued and a majority of kotas dismantled or destroyed. Kotas were not only used by the Muslims as defense against the Spaniards and other foreigners; renegades and rebels also built fortifications in defiance of other chiefs in the area. During the American occupation, rebels built strongholds, and the datus, rajahs, or sultans often built and reinforced their kotas in a desperate bid to maintain rule over their subjects and their land. Many of these forts were also destroyed by American expeditions; as a result, very few kotas still stand to this day.
Notable kotas:
Kota Selurong: an outpost of the Bruneian Empire in Luzon, later became the City of Manila.
Kuta Wato/Kota Bato: Literally translates to "stone fort"; it was the first known stone fortification in the country, and its ruins survive as the "Kutawato Cave Complex".
Kota Sug/Jolo: The capital and seat of the Sultanate of Sulu. When it was occupied by the Spaniards in the 1870s they converted the kota into the world's smallest walled city.
Pre-Islamic Arabia:
During Muhammad's lifetime: During Muhammad's era in Arabia, many tribes made use of fortifications. In the Battle of the Trench, the largely outnumbered defenders of Medina, mainly Muslims led by Islamic prophet Muhammad, dug a trench, which, together with Medina's natural fortifications, rendered the confederate cavalry (consisting of horses and camels) useless, locking the two sides in a stalemate. Hoping to make several attacks at once, the confederates persuaded the Medina-allied Banu Qurayza to attack the city from the south. However, Muhammad's diplomacy derailed the negotiations and broke up the confederacy against him. The well-organized defenders, the sinking of confederate morale, and poor weather conditions caused the siege to end in a fiasco.
During the Siege of Ta'if in January 630, Muhammad ordered his followers to attack enemies who fled from the Battle of Hunayn and sought refuge in the fortress of Taif.
Islamic world:
Africa: The entire city of Kerma in Nubia (present day Sudan) was encompassed by fortified walls surrounded by a ditch. Archaeology has revealed various Bronze Age bastions and foundations constructed of stone together with either baked or unfired brick.
The walls of Benin are described as the world's second-longest man-made structure, as well as the most extensive earthwork in the world, by the Guinness Book of Records, 1974. The walls may have been constructed between the thirteenth and mid-fifteenth century CE, or during the first millennium CE. Strong citadels were also built in other areas of Africa. Yorubaland, for example, had several sites surrounded by the full range of earthworks and ramparts seen elsewhere, sited on ground that improved defensive potential, such as hills and ridges. Yoruba fortifications were often protected with a double wall of trenches and ramparts, and in the Congo forests concealed ditches and paths, along with the main works, often bristled with rows of sharpened stakes. Inner defenses were laid out to blunt an enemy penetration with a maze of defensive walls allowing for entrapment and crossfire on opposing forces.
A military tactic of the Ashanti was to create powerful log stockades at key points. This was employed in later wars against the British to block British advances. Some of these fortifications were over a hundred yards long, with heavy parallel tree trunks. They were impervious to destruction by artillery fire. Behind these stockades, numerous Ashanti soldiers were mobilized to check enemy movement. While formidable in construction, many of these strongpoints failed because Ashanti guns, gunpowder and bullets were poor, and provided little sustained killing power in defense. Time and time again British troops overcame or bypassed the stockades by mounting old-fashioned bayonet charges, after laying down some covering fire.
Defensive works were of importance in the tropical African kingdoms. In the Kingdom of Kongo, field fortifications were characterized by trenches and low earthen embankments. Such strongpoints, ironically, sometimes held up much better against European cannon than taller, more imposing structures.
Medieval Europe: Roman forts and hill forts were the main antecedents of castles in Europe, which emerged in the 9th century in the Carolingian Empire. The Early Middle Ages saw the creation of some towns built around castles. These cities were only rarely protected by simple stone walls and more usually by a combination of both walls and ditches. From the 12th century, hundreds of settlements of all sizes were founded all across Europe, which very often obtained the right of fortification soon afterward.
The founding of urban centres was an important means of territorial expansion and many cities, especially in eastern Europe, were founded precisely for this purpose during the period of Eastern Colonisation. These cities are easy to recognise due to their regular layout and large market spaces. The fortifications of these settlements were continuously improved to reflect the current level of military development. During the Renaissance era, the Venetian Republic raised great walls around cities, and the finest examples, among others, are in Nicosia (Cyprus), Rocca di Manerba del Garda (Lombardy), and Palmanova (Italy), or Dubrovnik (Croatia), which proved to be futile against attacks but still stand to this day. Unlike the Venetians, the Ottomans used to build smaller fortifications but in greater numbers, and only rarely fortified entire settlements such as Počitelj, Vratnik, and Jajce in Bosnia.
Development after introduction of firearms: Medieval-style fortifications were largely made obsolete by the arrival of cannons on the 14th century battlefield. Fortifications in the age of black powder evolved into much lower structures with greater use of ditches and earth ramparts that would absorb and disperse the energy of cannon fire. Walls exposed to direct cannon fire were very vulnerable, so were sunk into ditches fronted by earth slopes.
This placed a heavy emphasis on the geometry of the fortification, so that defensive cannon could be given interlocking fields of fire covering all approaches to the lower and thus more vulnerable walls.
The evolution of this new style of fortification can be seen in transitional forts such as Sarzanello in north-west Italy, which was built between 1492 and 1502. Sarzanello has both crenellated walls with towers typical of the medieval period and a ravelin-like angular gun platform screening one of the curtain walls, which is protected from flanking fire from the towers of the main part of the fort. Another example is the fortifications of Rhodes, which were frozen in 1522, so that Rhodes is the only European walled town that still shows the transition between classical medieval fortification and the modern style. A manual on the construction of fortifications was published by Giovanni Battista Zanchi in 1554.
Fortifications also extended in depth, with protected batteries for defensive cannonry, to allow them to engage attacking cannons to keep them at a distance and prevent them from bearing directly on the vulnerable walls.
The result was star shaped fortifications with tier upon tier of hornworks and bastions, of which Fort Bourtange is an excellent example. There are also extensive fortifications from this era in the Nordic states and in Britain, the fortifications of Berwick-upon-Tweed and the harbour archipelago of Suomenlinna at Helsinki being fine examples.
19th century: The arrival of explosive shells in the 19th century led to yet another stage in the evolution of fortification. Star forts did not fare well against the effects of high explosives, and the intricate arrangements of bastions, flanking batteries and carefully constructed lines of fire for the defending cannon could be rapidly disrupted by explosive shells.
Worse, the large open ditches surrounding forts of this type were an integral part of the defensive scheme, as was the covered way at the edge of the counterscarp. The ditch was extremely vulnerable to bombardment with explosive shells.
In response, military engineers evolved the polygonal style of fortification. The ditch became deep and vertically sided, cut directly into the native rock or soil, laid out as a series of straight lines creating the central fortified area that gives this style of fortification its name.
Wide enough to be an impassable barrier for attacking troops, but narrow enough to be a difficult target for enemy shellfire, the ditch was swept by fire from defensive blockhouses set in the ditch as well as firing positions cut into the outer face of the ditch itself.
The profile of the fort became very low indeed: beyond the ditch, which was covered by caponiers, the fort was surrounded by a gently sloping open area so as to eliminate possible cover for enemy forces, while the fort itself provided a minimal target for enemy fire. The entry point became a sunken gatehouse in the inner face of the ditch, reached by a curving ramp that gave access to the gate via a rolling bridge that could be withdrawn into the gatehouse.
Much of the fort moved underground. Deep passages and tunnel networks now connected the blockhouses and firing points in the ditch to the fort proper, with magazines and machine rooms deep under the surface. The guns, however, were often mounted in open emplacements and protected only by a parapet; both in order to keep a lower profile and also because experience with guns in closed casemates had seen them put out of action by rubble as their own casemates were collapsed around them.
Gone were citadels surrounding towns: forts were to be moved out some 12 km from the cities to keep the enemy at a distance, so that their artillery could not bombard the city center. From then on, a ring of forts was to be built at a spacing that would allow them to effectively cover the intervals between them.
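The spacing rule implied here is simple geometry: the number of forts in the ring is its circumference divided by the largest interval that neighbouring forts can cover between them. The sketch below is a hypothetical illustration of that arithmetic; the per-fort coverage radius is an assumed figure, since the text gives only the roughly 12 km distance from the city.

```python
import math

# Rough sizing of a detached-fort ring around a city (illustrative only).
# The effective coverage radius per fort is an assumed figure.

def forts_needed(ring_radius_km: float, coverage_radius_km: float) -> int:
    """Minimum fort count so the gap between neighbours stays within 2 x coverage."""
    circumference = 2 * math.pi * ring_radius_km
    max_gap = 2 * coverage_radius_km
    return math.ceil(circumference / max_gap)

# A 12 km ring (from the text) with, say, 4 km of effective coverage per fort:
print(forts_needed(12, 4))  # -> 10 forts
```

With those figures a complete ring works out to about ten forts; longer-ranged armament would allow a thinner ring.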
The new forts abandoned the principle of the bastion, which had also been made obsolete by advances in arms. The outline was a much-simplified polygon, surrounded by a ditch. These forts, built in masonry and shaped stone, were designed to shelter their garrison against bombardment. One organizing feature of the new system involved the construction of two defensive curtains: an outer line of forts, backed by an inner ring or line at critical points of terrain or junctions (see, for example, Séré de Rivières system in France).
Traditional fortification however continued to be applied by European armies engaged in warfare in colonies established in Africa against lightly armed attackers from amongst the indigenous population. A relatively small number of defenders in a fort impervious to primitive weaponry could hold out against high odds, the only constraint being the supply of ammunition.
20th and 21st centuries: Steel-and-concrete fortifications were common during the 19th and early 20th centuries. However, the advances in modern warfare since World War I have made large-scale fortifications obsolete in most situations. In the 1930s and 1940s, some fortifications were built with designs taking into consideration the new threat of aerial warfare, such as Fort Campbell in Malta. Despite this, only underground bunkers are still able to provide some protection in modern wars. Many historical fortifications were demolished during the modern age, but a considerable number survive as popular tourist destinations and prominent local landmarks today.
The downfall of permanent fortifications had two causes:
The ever-escalating power, speed, and reach of artillery and airpower meant that almost any target that could be located could be destroyed if sufficient force were massed against it. As such, the more resources a defender devoted to reinforcing a fortification, the more combat power that fortification justified being devoted to destroying it, if the fortification's destruction was demanded by an attacker's strategy. From World War II, bunker busters were used against fortifications. By 1950, nuclear weapons were capable of destroying entire cities and producing dangerous radiation. This led to the creation of civilian nuclear air raid shelters.
The second weakness of permanent fortification was its very permanency. Because of this, it was often easier to go around a fortification and, with the rise of mobile warfare in the beginning of World War II, this became a viable offensive choice. When a defensive line was too extensive to be entirely bypassed, massive offensive might could be massed against one part of the line allowing a breakthrough, after which the rest of the line could be bypassed. Such was the fate of the many defensive lines built before and during World War II, such as the Siegfried Line, the Stalin Line, and the Atlantic Wall. This was not the case with the Maginot Line; it was designed to force the Germans to invade other countries (Belgium or Switzerland) to go around it, and was successful in that sense.
Instead, field fortification rose to dominate defensive action. Unlike the trench warfare which dominated World War I, these defences were more temporary in nature. This was an advantage because, being less extensive, they formed a less obvious target for enemy force to be directed against.
If sufficient power were massed against one point to penetrate it, the forces based there could be withdrawn and the line could be re-established relatively quickly. Instead of a supposedly impenetrable defensive line, such fortifications emphasized defence in depth, so that as defenders were forced to pull back or were overrun, the lines of defenders behind them could take over the defence.
Because the mobile offensives practised by both sides usually focused on avoiding the strongest points of a defensive line, these defences were usually relatively thin and spread along the length of a line. The defence was usually not equally strong throughout, however.
The strength of the defensive line in an area varied according to how rapidly an attacking force could progress in the terrain that was being defended—both the terrain the defensive line was built on and the ground behind it that an attacker might hope to break out into. This was both for reasons of the strategic value of the ground, and its defensive value.
This was possible because while offensive tactics were focused on mobility, so were defensive tactics. The dug-in defences consisted primarily of infantry and antitank guns. Defending tanks and tank destroyers would be concentrated in mobile brigades behind the defensive line. If a major offensive was launched against a point in the line, mobile reinforcements would be sent to reinforce that part of the line that was in danger of failing.
Thus the defensive line could be relatively thin because the bulk of the fighting power of the defenders was not concentrated in the line itself but rather in the mobile reserves. A notable exception to this rule was seen in the defensive lines at the Battle of Kursk during World War II, where German forces deliberately attacked the strongest part of the Soviet defences, seeking to crush them utterly.
The terrain that was being defended was of primary importance because open terrain that tanks could move over quickly made possible rapid advances into the defenders' rear areas that were very dangerous to the defenders. Thus such terrain had to be defended at all costs.
In addition, since in theory the defensive line only had to hold out long enough for mobile reserves to reinforce it, terrain that did not permit rapid advance could be held more weakly because the enemy's advance into it would be slower, giving the defenders more time to reinforce that point in the line. For example, the Battle of the Hurtgen Forest in Germany during the closing stages of World War II is an excellent example of how difficult terrain could be used to the defenders' advantage.
After World War II, intercontinental ballistic missiles capable of reaching much of the way around the world were developed, so speed became an essential characteristic of the strongest militaries and defenses. Missile silos were developed, so missiles could be fired from the middle of a country and hit cities and targets in another country, and airplanes (and air carriers) became major defenses and offensive weapons (leading to an expansion of the use of airports and airstrips as fortifications). Mobile defenses could be had underwater, too, in the form of nuclear submarines capable of firing missiles. Some bunkers in the mid to late 20th century came to be buried deep inside mountains and prominent rocks, such as Gibraltar and the Cheyenne Mountain Complex. On the ground itself, minefields have been used as hidden defences in modern warfare, often remaining long after the wars that produced them have ended.
Demilitarized zones along borders are arguably another type of fortification, although a passive kind, providing a buffer between potentially hostile militaries.
Military airfields: Military airfields offer a fixed, "target rich" environment to even relatively small enemy forces, which can use hit-and-run tactics by ground forces, stand-off attacks (mortars and rockets), air attacks, or ballistic missiles. Key targets—aircraft, munitions, fuel, and vital technical personnel—can be protected by fortifications.
Aircraft can be protected by revetments, Hesco barriers, hardened aircraft shelters and underground hangars, which guard against many types of attack. Larger aircraft types tend to be based outside the operational theatre.
Munition storage follows safety rules which use fortifications (bunkers and bunds) to provide protection against accidents and chain reactions (sympathetic detonations). Weapons for rearming aircraft can be stored in small fortified expense stores closer to the aircraft. At Bien Hoa, South Vietnam, on the morning of 16 May 1965, as aircraft were being re-fuelled and armed, a chain reaction explosion destroyed 13 aircraft, killed 34 personnel, and injured over 100; this, along with damage and losses of aircraft to enemy attack (by both infiltration and stand-off attacks), led to the construction of revetments and shelters to protect aircraft throughout South Vietnam.
Aircrew and ground personnel need protection during enemy attacks, and fortifications range from culvert-section "duck and cover" shelters to permanent air-raid shelters. Soft locations with high personnel densities, such as accommodation and messing facilities, can be given limited protection by placing prefabricated concrete walls or barriers around them; examples of such barriers are Jersey Barriers, T Barriers and Splinter Protection Units (SPUs). Older fortifications may prove useful, such as the old 'Yugo' pyramid shelters built in the 1980s, which were used by US personnel on 8 January 2020 when Iran fired 11 ballistic missiles at Ayn al-Asad Airbase in Iraq.
Fuel is volatile and its storage has to comply with rules which provide protection against accidents. Fuel in underground bulk fuel installations is well protected, though valves and controls are vulnerable to enemy action. Above-ground tanks can be susceptible to attack.
Ground support equipment will need to be protected by fortifications to be usable after an enemy attack.
Permanent (concrete) guard fortifications are safer, stronger, last longer and are more cost-effective than sandbag fortifications. Prefabricated positions can be made from concrete culvert sections. The British Yarnold Bunker is made from sections of a concrete pipe.
Guard towers provide an increased field of view but a lower level of protection.
Dispersal and camouflage of assets can supplement fortifications against some forms of airfield attack.
Counter-insurgency: Just as in colonial periods, comparatively obsolete fortifications are still used for low-intensity conflicts. Such fortifications range in size from small patrol bases or forward operating bases up to huge airbases such as Camp Bastion/Leatherneck in Afghanistan. Much like in the 18th and 19th century, because the enemy is not a powerful military force with the heavy weaponry required to destroy fortifications, walls of gabion, sandbag or even simple mud can provide protection against small arms and anti-tank weapons—although such fortifications are still vulnerable to mortar and artillery fire.
Forts: Forts in modern American usage often refer to space set aside by governments for a permanent military facility; these often do not have any actual fortifications, and can have specializations (military barracks, administration, medical facilities, or intelligence).
However, there are some modern fortifications that are referred to as forts. These are typically small semi-permanent fortifications. In urban combat, they are built by upgrading existing structures such as houses or public buildings. In field warfare they are often log, sandbag or gabion type construction.
Such forts are typically only used in low-level conflicts, such as counterinsurgency conflicts or very low-level conventional conflicts, such as the Indonesia–Malaysia confrontation, which saw log forts used by forward platoons and companies. The reason for this is that static above-ground forts cannot survive modern direct or indirect fire weapons larger than mortars, RPGs and small arms.
Prisons and others: Fortifications designed to keep the inhabitants of a facility in, rather than attackers out, can also be found in prisons, concentration camps, and other such facilities. Those are covered in other articles, as most prisons and concentration camps are not primarily military forts (although forts, camps, and garrison towns have been used as prisons or concentration camps, such as Theresienstadt, the Guantanamo Bay detention camp and the Tower of London).
Field fortifications:
Notes:
References: This article incorporates text from a publication now in the public domain: Jackson, Louis Charles (1911). "Fortification and Siegecraft". In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 10 (11th ed.). Cambridge University Press. pp. 679–725.
Bibliography: July, Robert. Pre-Colonial Africa, Charles Scribner, 1975.
Murray, Nicholas. "The Development of Fortifications", The Encyclopedia of War, Gordon Martel (ed.). WileyBlackwell, 2011.
Murray, Nicholas. The Rocky Road to the Great War: The Evolution of Trench Warfare to 1914. Potomac Books Inc. (an imprint of the University of Nebraska Press), 2013.
Osadolor, Osarhieme Benson, "The Military System of Benin Kingdom 1440–1897", (UD), Hamburg University: 2001.
Thornton, John Kelly. Warfare in Atlantic Africa, 1500–1800, Routledge: 1999, ISBN 1857283937.
External links:
Fortress Study Group
Military Architecture at the Wayback Machine (archived 5 December 2018)
ICOFORT |
mil_tactics_continued_pretraining.csv | Forward air control | Early air ground support efforts: As close air support began during World War I, there were pioneer attempts to direct the trench strafing by the ground troops marking their positions by laying out signal panels on the ground, firing flares, or lighting smoke signals. Aircrews had difficulty communicating with the ground troops; they would drop messages or use messenger pigeons. Benno Fiala von Fernbrugg, an Austro-Hungarian pilot, pioneered the use of radio for fire control; at the Battle of Gorlice he used a radio transmitter in his airplane to send changes via morse code to an artillery battery on the ground. Colonel Billy Mitchell also equipped his Spad XVI command airplane with a radio, and the Germans experimented with radios in their Junkers J.I all-metal-structure, armored-fuselage sesquiplanes.
The Marines in the so-called Banana Wars of the 1920s and 1930s used Curtiss Falcons and Vought Corsairs that were equipped with radios powered by airstream-driven generators, with a range of up to 50 miles. Another method of communication was for the pilot to drop messages in a weighted container, and to swoop in and pick up messages hung out by ground troops on a "clothesline" between poles. The objective was aerial reconnaissance and air attack. Using these various methods, the Marine pilots combined the functions of both FAC and strike aircraft, as they carried out their own air attacks on the Sandinistas in Nicaragua in 1927. The commonality of pilots and ground troops belonging to the same service led to a close air support role similar to that sought by use of FACs, without the actual use of a FAC. On 27 October 1927, a Marine patrol used cloth panels to direct an air strike—arguably the first forward air control mission. This distinctive U.S. Marine doctrine of interaction between Marine infantry and aviation would persist, recurring in the Korean War and the Vietnam War.
French colonial operations in the Rif War from 1920 to 1926 used air power in a similar way to the Marines in Nicaragua against the Sandinistas, but in a different environment, the desert. The French Mobile Groups of combined arms not only used aircraft for scouting and air attack; the airplanes carried trained artillery officers as observers. These aerial observers called in artillery fire via radio.
The German military noted close air support operations in the Spanish Civil War and decided to develop its own forward air control capability. By 1939, it had forward air control teams, called Ground Attack Teams, attached to every headquarters from regiment level upwards. These teams directed air strikes flown by Luftwaffe close air support units. Extensive coordinated training by air and ground troops had raised this system to state of the art by the beginning of World War II.
When the United States Army Air Forces (USAAF) was founded on 20 June 1941, it included provisions for Air Ground Control Parties to serve with the United States Army at the division, corps, and Army headquarters. The Air Ground Control Parties' functions were to regulate bombing and artillery in close conjunction with the ground troops, as well as assess bomb damage. They were thus the first of similar units to try to fulfill the functions of the FAC without being airborne. However, these units were often plagued by turf wars and cumbersome communications between the respective armies and air forces involved. As a result, it could take hours for an air strike requested by ground troops to actually show up.
World War II: Forward air control during World War II came into existence as a result of exigency and was used in several theaters of the war. Its reincarnation in action was a result of field expedience rather than planned operations.
On the Allied side, British forces in the North Africa campaign began using the Forward Air Support Links, a "tentacle" system that used radio links from front line units to the rear. Air force teams were co-located with the army command. Close air support would be requested by forward units and, if approved, delivered from "cab ranks" of fighter-bombers held near the front lines. The requesting unit would direct the air strikes. The U.S. Army would not copy the British system until the Allied invasion of Italy, but adapted it for use there and in France after the Invasion of Normandy of 6 June 1944.
In the Pacific Theater, 4 Squadron of the Royal Australian Air Force began forward air control at the Battle of Buna-Gona, New Guinea in November 1942. The RAAF continued forward air control in the Pacific for the rest of the war. By November 1943, the U.S. Marines were using forward air control during the Battle of Bougainville.
The United States would end World War II still without an air control doctrine. When the U.S. Air Force split from the U.S. Army in 1947, neither took on the responsibility for forward air control; the U.S. military thus had no functional forward air control when the Korean War broke out.
Post World War II:
British Commonwealth operations: The United Kingdom and Commonwealth continued to build on their experience in the Second World War in various campaigns around the world in the second half of the twentieth century, including the Malayan Emergency, the Suez Crisis, the Indonesian Confrontation and operations in Aden and Oman. With the re-formation of the Army Air Corps in 1957 this new corps's functions included airborne forward air control.
Korean War: Although the United States, as part of the United Nations Command (UNC) in the Korean War, entered the war on 26 June 1950 with no forward air controllers, it rapidly improvised close air support procedures for UNC forces. By 20 July, jury-rigged systems were not only controlling air strikes against the communist foe, but also occasionally directing aerial interceptions of opposing aircraft. Both the U.S. high command and North Korean General Nam Il agreed that only tactical air power saved United Nations forces from defeat during the mobile warfare stage of the war.
When the front lines bogged down into static trench warfare in the summer of 1951, forward air control diminished in importance. To cope with the communist switch to night operations, both radar and Shoran bombing techniques were developed. However, close air support continued, and forward air controllers were sometimes used to direct interdiction missions against the communist lines of communication. By this time, Allied air forces were contributing a considerable portion of the tactical air strikes.
By the cessation of hostilities, airborne forward air controllers alone were credited with flying 40,354 forward air control sorties, and directing air strikes that killed an estimated 184,808 communist troops. At times, tactical air was credited with inflicting about half of all communist casualties.
Despite agreement on a common forward air control doctrine, embodied in Field Manual 31-35 Air-Ground Operations, a turf war over doctrine raged between the U.S. Air Force and the U.S. Army for the entire war. Additionally, the U.S. Marine Corps maintained its own FAC operation during the war. Also, U.S. Navy carrier aviation would not completely coordinate its operations with the Air Force/Army system until the final month of the war. With no common doctrine agreed upon during the war, forward air control systems were shut down postwar in 1956.
Vietnam War: Forward air controllers played a major part in the largest bombing campaign in history during the Vietnam War. While World War II had featured indiscriminate mass air raids on major cities worldwide, bombing during the Vietnam War was aimed at smaller targets in a country the size of New Mexico. Unless bombs were dropped in a free fire zone, or on a pre-briefed target, the bombing in Vietnam was directed by FACs. Also unlike World War II, serious efforts were made to avoid hitting the civilian populace, which also called for FAC intervention.
Reinvention of forward air control: In 1961, when forward air control was revived, it promptly ran into the recurring problems of unreliable radios, a shortage of supplies, lack of suitable aircraft, differing concepts of close air support, and unfavorable terrain.
The first manning requirement for FACs, levied in 1962, amounted to 32 slots in Vietnam. Even as the slots slowly filled, the requirement proved inadequate. The 19th Tactical Air Support Squadron was then assigned in-country in mid-1963 to augment the FAC force. By January 1965, there were still only 144 USAF FACs in Southeast Asia. While the U.S. Air Force would continue to add more FACs, projecting a need for 831 FACs, and stationing four more Tactical Air Support Squadrons in Southeast Asia by April 1965, the manning levels of assigned FACs would run about 70% of need until December 1969. Other branches of the U.S. military also had FACs; the U.S. Army had at least two aviation companies of FACs, the U.S. Marine Corps had an organic FAC squadron within its forces, and the U.S. Navy established its own FAC squadron in the Mekong Delta. U.S. involvement had begun with a South Vietnamese FAC training program; later in the war, Laotians and Hmong were also trained as FACs.
Technological developments: There was a great deal of technical innovation in forward air control operations during the course of the Vietnam War. The United States came up with a number of ways to make its forward air control system more effective. As early as 1962, Douglas C-47 flareship FACs began the forward air control mission in South Vietnam, mostly on night missions. In September 1965, another C-47 went into action as the first Airborne Command and Control Center. As additional ABCCC aircraft were added, they would constantly govern the air war in Southeast Asia.
By early 1966, a rising level of communist anti-aircraft fire against propeller-driven FAC aircraft necessitated the use of jet aircraft for FACs in high-risk areas in North Vietnam. The Fast FAC mission would supplement the FAC mission in Southeast Asia until war's end.
In July 1966, night FAC operations began against the Ho Chi Minh Trail; A-26 Invaders began a dual FAC/strike mission under call sign "Nimrod". The U.S. Air Force began Operation Shed Light as a test of night time battlefield illumination. In response to increasing pressure from air strikes, the communists turned entirely to night operations in Vietnam by 1968. C-123 Provider cargo aircraft were used as flareships to light up the Trail and direct air strikes, under the call sign "Candlestick", until late 1969. Withdrawn in the face of mounting opposition, the flareships would still serve elsewhere in theater until 30 June 1971. In a similar role, Lockheed AC-130 gunships, call sign "Blindbat", not only lit the Trail and directed air strikes, but used their own copious firepower on enemy trucks. The gunships carried both electronic sensors tied into Operation Igloo White and night observation devices for spotting enemy trucks, as well as a computerized fire control system.
On 1 November 1968, President Lyndon Johnson declared a halt to the bombing of North Vietnam. With that act, the focus of the contending forces became the Ho Chi Minh Trail. As the U.S. more than quadrupled the number of airstrikes aimed at interdiction, North Vietnamese anti-aircraft guns and gunners transferred south to the Trail to match this new onslaught. Both sides realized that the supply of military necessities being moved south to insurgents would be crucial to a communist victory. At about this time, the Raven FACs began supporting Vang Pao's Central Intelligence Agency-supported guerrilla army on the Plain of Jars in northern Laos with air strikes serving as aerial artillery blasting the way clear for offensive sweeps by the partisans.
In early 1970, in an attempt to improve bombing accuracy, the USAF began using laser guided ordnance.
Results: By May 1971, U.S. Air Force intelligence concluded that air strikes had wiped out all the North Vietnamese trucks on the Ho Chi Minh Trail. This was a demonstrably untrue conclusion, as trucks still traversed the Trail until the communist takeover in 1975. After war's end, the U.S. Air Force ended the forward air control mission, just as it had following World War II and Korea.
Indo-Pakistani War: Major Atma Singh, of the Indian Army, flying a HAL Krishak, played a crucial part in a close air support defense against steep odds. The Pakistani loss of armor in December 1971 was one of the most severe since the great armored clashes of World War II. Major Singh won the Maha Vir Chakra for his performance under heavy ground fire.
Portuguese Overseas War: During the Portuguese Overseas War, the Portuguese Air Force used mainly Dornier Do 27 and OGMA/Auster D.5 light aircraft in the forward air control role, in the several theatres of operation: Angola, Portuguese Guinea and Mozambique.
Rhodesia: During the Rhodesian Bush War the Rhodesian Air Force mounted Airborne FACs in Aermacchi AL60 B Trojans and Lynx aircraft.
South Africa: South Africa deployed both Airborne FACs (in AM.3CM Bosboks) and ground-based FACs during the Border War, including the Battle of Cassinga. During the Force Intervention Brigade operations in the Democratic Republic of the Congo, an FAC called in 27 missions.
Present day doctrines:
NATO: For NATO forces, the qualifications and experience required to be a FAC are set out in a NATO Standardization Agreement (STANAG). FACs may form part of a Fire Support Team or Tactical Air Control Party; they may be ground based, or airborne FACs in fixed-wing aircraft (FAC-A) or in helicopters (ABFAC). Since 2003 the United States Armed Forces have used the term joint terminal attack controller (JTAC) for some of their ground based FACs.
NATO is making efforts to increase safety and reduce the risk of fratricide in air-to-ground operations. Co-operation between different NATO agencies, such as the NATO Standardization Agency and the JAPCC, resulted in the development of common standards for Forward Air Controllers, and these are now set out in STANAG 3797 (Minimum Qualifications for Forward Air Controllers). NATO FACs are trained to request, plan, brief and execute CAS operations for both low-level and medium/high-level operations, and their training includes electronic warfare, suppression of enemy air defences, enemy air defence, air command and control, attack methods and tactics, weaponeering and Joint Air Attack Team Tactics.
United Kingdom armed forces: FACs in the United Kingdom are trained at the Joint Forward Air Controller Training and Standards Unit (JFACTSU) where controllers are drawn from all three services: Naval Service (Royal Marines and Royal Marines Reserve), the Army, and the RAF (RAF Regiment). UK FACs operate as TACPs or form part of Royal Artillery Fire Support Teams which direct artillery as well as close air support. The Army Air Corps provides Airborne Forward Air Controllers.
United States Marine Corps: When deployed on operations, each USMC infantry company is allocated a FAC or JTAC. Such an assignment (designated as a "B-Billet") is often given to Marine aviators, as they are the most knowledgeable about close air support and air superiority doctrines.
Afghanistan National Army: The Afghan National Army (ANA) relied on coalition partners to raise and sustain its FAC and Joint Fires Officer (JFO) capability. The ANA capability, known as the Afghan Tactical Air Coordinator, maintained a skill equivalency to that of a JFO. Australian JFOs pioneered this capability within the ANA.
See also: Air naval gunfire liaison company
Artillery observer
Fire Support Team
Forward Air Control Development Unit RAAF
Joint terminal attack controller
Tactical Air Control Party
United States Air Force Combat Control Team
Notes:
References: Chant, Christopher (2002). Austro-Hungarian Aces of World War 1. Osprey Publishing. ISBN 1-84176-376-4, ISBN 978-1-84176-376-7.
Churchill, Jan (1997). Hit My Smoke!: Forward Air Controllers in Southeast Asia. Sunflower University Press. ISBN 0-89745-215-1, ISBN 978-0-89745-215-1.
Cossey, Bob (2009). Upward and Onward: Life of Air Vice-Marshal John Howe CB, CBE, AFC. Pen and Sword. ISBN 1-84415-820-9, ISBN 978-1-84415-820-1.
Dorr, Robert F., and Warren Thompson (2003). Korean Air War. Zenith Imprint. ISBN 0-7603-1511-6, ISBN 978-0-7603-1511-8.
Dunnigan, James F. and Albert A. Nofi (2000). Dirty Little Secrets of the Vietnam War: Military Information You're Not Supposed to Know. Macmillan. ISBN 0-312-25282-X, ISBN 9780312252823.
Futrell, Robert F. (1961). The United States Air Force in Korea 1950-1953. Air Force History and Museums Program 2000 reprint of the original Duell, Sloan and Pearce edition. ISBN 0160488796, ISBN 978-0160488795.
Gooderson, Ian (1998). Air Power at the Battlefront: Allied Close Air Support in Europe 1943-45 (Studies in Air Power). Routledge. ISBN 0714642118, ISBN 978-0714642116.
Hallion, Richard (1989). Strike from the Sky: The History of Battlefield Air Attack, 1911-1945. Smithsonian Institution Press. ISBN 0-87474-452-0, ISBN 978-0-87474-452-1.
Hooper, Jim (2009). A Hundred Feet Over Hell: Flying With the Men of the 220th Recon Airplane Company Over I Corps and the DMZ, Vietnam 1968-1969. Zenith Imprint. ISBN 0-7603-3633-4, ISBN 978-0-7603-3633-5.
Lester, Gary Robert (1987). Mosquitoes to Wolves: The Evolution of the Airborne Forward Air Controller. Air University Press. ISBN 1-58566-033-7, ISBN 978-1-58566-033-9.
Nalty, Bernard C. (2005). War Against Trucks: Aerial Interdiction in Southern Laos 1968–1972. Air Force History and Museums Program, United States Air Force. ISBN 9781477550076.
Norval, Morgan (1990). Death in the Desert: The Namibian Tragedy. Selous Foundation Press. ISBN 0944273033, ISBN 978-0944273036.
Schlight, John (2003). Help from Above: Air Force Close Air Support of the Army 1946-1973. Air Force History and Museums Program. ISBN 178039442X, ISBN 978-1780394428.
Shepperd, Don (2002). Misty, First Person Stories of the F-100 Misty Fast FAC in the Vietnam War. 1st Books Library. ISBN 0-7596-5254-6.
Stringer, Kevin Douglas and John Adams Wickham (2006). Military Organizations for Homeland Defense and Smaller-scale Contingencies: A Comparative Approach. Greenwood Publishing Group. ISBN 0275993086, ISBN 9780275993085.
External links: Joint Publication 3-09.3 Joint Tactics, Techniques, and Procedures for Close Air Support (CAS)
Michael Amrine (August 1951). "He Runs An Air Force For Gravel Crunchers". Popular Science. Bonnier Corporation. p. 92. |
mil_tactics_continued_pretraining.csv | Forward operating base | Description: In its most basic form, a forward operating base consists of a ring of barbed wire around a position with a fortified entry control point, or ECP. An ECP is a controlled entry and exit point of the FOB and typically has positions to protect personnel against personnel-borne improvised explosive devices (PBIED) and vehicle-borne improvised explosive devices (VBIED), plus blast mitigation with standoff protection.
More advanced FOBs include an assembly of berms, concrete barriers, gates, guard towers, pillboxes and bunkers and other force protection infrastructure. They are often built from Hesco bastions.
Bases in Iraq:
Bases in Afghanistan:
FOBs in the United States:
Other reported Coalition installations in Afghanistan 2001–2016:
See also: Advance airfield
Advanced Landing Ground
Fire support base
Forward Operating Site
Loss of Strength Gradient
Main Operating Base
Naval outlying landing field
Satellite airfield
List of established military terms
References:
External links: |
mil_tactics_continued_pretraining.csv | Fourth-generation warfare | Elements: Fourth-generation warfare is defined as conflicts which involve the following elements:
Complex and long term
Use of terrorism as a tactic
A non-national or transnational base – highly decentralized
A direct attack on the enemy's culture, including genocidal acts against civilians.
All available pressures are used – political, economic, social and military
Occurs in low-intensity conflict, involving actors from all networks
Non-combatants are tactical dilemmas
Lack of hierarchy
Small in size, spread out network of communication and financial support
Use of insurgency tactics such as subversion, terrorism and guerrilla tactics
Decentralised forces
History: The concept was first described by the authors William S. Lind, Colonel Keith Nightengale (US Army), Captain John F. Schmitt (USMC), Colonel Joseph W. Sutton (US Army), and Lieutenant Colonel Gary I. Wilson (USMCR) in a 1989 Marine Corps Gazette article titled "The Changing Face of War: Into the Fourth Generation". In 2006, the concept was expanded upon by USMC Colonel Thomas X. Hammes (Ret.) in his book, The Sling and The Stone.
The generations of warfare described by these authors are:
First generation: tactics of line and column, which developed in the age of the smoothbore musket. Lind describes the first generation of warfare as beginning after the Peace of Westphalia in 1648, which ended the Thirty Years' War and established the state's need to organize and conduct war. 1GW consisted of tightly ordered soldiers with top-down discipline. These troops would fight in close order and advance slowly. This began to change as the battlefield changed. Old line and column tactics are now considered suicidal, as the bow and arrow and sword gave way to the rifle and machine gun.
Second generation: tactics of linear fire and movement, with reliance on indirect fire. This type of warfare can be seen in the early stages of World War I where there was still strict adherence to drill and discipline of formation and uniform. However, there remained a dependence on artillery and firepower to break the stalemate and move towards a pitched battle.
Third generation: tactics of infiltration to bypass and collapse the enemy's combat forces rather than seeking to close with and destroy them; and defence in depth. The 3GW military seeks to bypass the enemy and collapse him from the rear forward, as with the tactics used by German Stormtroopers in World War I against the British and French in order to break the trench warfare stalemate (Lind 2004). These aspects of 3GW bleed into 4GW, as it is also warfare of speed and initiative. However, it targets both military forces and home populations.
The use of fourth-generation warfare can be traced to the Cold War period, as superpowers and major powers attempted to retain their grip on colonies and captured territories. Unable to withstand direct combat against bombers, tanks, and machine guns, non-state entities used tactics of education/propaganda, movement-building, secrecy, terror, and/or confusion to overcome the technological gap.
Fourth-generation warfare has often involved an insurgent group or other violent non-state actor trying to implement their own government or reestablish an old government over the current ruling power. However, a non-state entity tends to be more successful when it does not attempt, at least in the short term, to impose its own rule, but tries simply to disorganize and delegitimize the state in which the warfare takes place. The aim is to force the state adversary to expend manpower and money in an attempt to establish order, ideally in such a highhanded way that it merely increases disorder, until the state surrenders or withdraws.
Fourth-generation warfare is often seen in conflicts involving failed states and civil wars, particularly in conflicts involving non-state actors, intractable ethnic or religious issues, or gross conventional military disparities. Many of these conflicts occur in the geographic area described by author Thomas P.M. Barnett as the Non-Integrating Gap, fought by countries from the globalised Functioning Core.
Fourth-generation warfare has much in common with traditional low-intensity conflict in its classical forms of insurgency and guerrilla war. As in those small wars, the conflict is initiated by the "weaker" party through actions which can be termed "offensive". The difference lies in the manner in which 4GW opponents adapt those traditional concepts to present day conditions. These conditions are shaped by technology, globalization, religious fundamentalism, and a shift in moral and ethical norms which brings legitimacy to certain issues previously considered restrictions on the conduct of war. This amalgamation and metamorphosis produces novel ways of war for both the entity on the offensive and that on the defensive.
Characteristics: Fourth-generation warfare is normally characterized by a violent non-state actor (VNSA) fighting a state. This fighting can be physical, as with modern examples such as Hezbollah or the Liberation Tigers of Tamil Eelam (LTTE). In this realm, the VNSA uses all three levels of fourth generation warfare. These are the physical (actual combat, considered the least important), mental (the will to fight, belief in victory, etc.) and moral (the most important; this includes cultural norms, etc.) levels.
A 4GW enemy has the following characteristics: lack of hierarchical authority, lack of formal structure, patience and flexibility, ability to keep a low profile when needed, and small size. A 4GW adversary might use the tactics of an insurgent, terrorist, or guerrilla in order to wage war against a nation's infrastructure. Fourth generation warfare takes place on all fronts: economical, political, the media, military, and civilian. Conventional military forces often have to adapt tactics to fight a 4GW enemy.
Resistance can also be below the physical level of violence. This is via non-violent means, such as Mahatma Gandhi's opposition to the British Empire or the marches led by Martin Luther King Jr. Both desired that their factions de-escalate the conflict while the state escalated against them, the objective being to target the opponent on the moral and mental levels rather than the physical level. The state is then seen as oppressive and loses support.
Another characteristic of fourth-generation warfare is that, unlike in third generation warfare, the VNSA's forces are decentralized. With fourth generation warfare, there may even be no single organization, and smaller groups may organize into impromptu alliances to target a bigger threat (that being the state armed forces or another faction). As a result, these alliances are weak, and if the state's military leadership is smart enough it can split the enemy and cause the factions to fight amongst themselves.
Fourth-generation warfare goals:
Survival.
To convince the enemy's political decision makers that their goals are either unachievable or too costly for the perceived benefit.
Yet another factor is that political centers of gravity have changed. These centers of gravity may revolve around nationalism, religion, or family or clan honor.
Disaggregated forces, such as guerrillas, terrorists, and rioters, which lack a center of gravity, deny to their enemies a focal point at which to deliver a conflict ending blow. As a result, strategy becomes more problematic while combating a VNSA.
It has been theorized that a state vs. state conflict in fourth-generation warfare would involve the use of computer hackers and international law to obtain the weaker side's objectives, the logic being that the civilians of the stronger state would lose the will to fight as a result of seeing their state engage in alleged atrocities and having their own bank accounts harmed.
Three principal attributes of the new-age terrorism were held to be their hybrid structure (as opposed to the traditional microscopic command and control pattern), importance given to systemic disruption vis-a-vis target destruction, and sophisticated use of technological advancements (including social media and mobile communications technology). A terrorist network could be designed to be either acephalous (headless like Al-Qaeda after Bin Laden) or polycephalous (hydra-headed like Kashmiri separatists). Social media networks supporting the terrorists are characterized by positive feedback loops, tight coupling and non-linear response propagation (viz. a small perturbation causing a large disproportionate response).
Criticism: Fourth-generation warfare theory has been criticized on the grounds that it is "nothing more than repackaging of the traditional clash between the non-state insurgent and the soldiers of a nation-state."
Strategic Studies Institute writer and United States Army War College professor Antulio J. Echevarria II, in his article Fourth-Generation War and Other Myths, argues what is being called fourth generation warfare are simply insurgencies. He also claims that 4GW was "reinvented" by Lind to create the appearance of having predicted the future. Echevarria writes: "The generational model is an ineffective way to depict changes in warfare. Simple displacement rarely takes place, significant developments typically occur in parallel." The critique was rebutted by John Sayen, a military historian and retired Lt. Col. in the Marine Corps Reserve.
Lieutenant General Kenneth F. McKenzie Jr., USMC, characterizes fourth-generation warfare theory as "elegant irrelevance" and states that "its methods are unclear, its facts contentious and open to widely varying interpretations, and its relevance questionable."
Rod Thornton argues that Thomas Hammes and William S. Lind are "providing an analytical lens through which to view the type of opposition that exists now 'out there' and to highlight the shortcomings of the current US military in dealing with that opposition." Instead of fourth generation warfare being an explanation for a new way of warfare, it allows the blending of different generations of warfare with the exception that fourth generation also encompasses new technology. Fourth generation warfare theorists such as Lind and Hammes wish to make the point that it "is not just that the military's structure and equipment are ill-suited to the 4GW problem, but so is its psyche".
See also:
References: |
mil_tactics_continued_pretraining.csv | Frogman | Scope of operations: Tactical diving is a branch of professional diving carried out by armed forces and tactical units. They may be divided into:
Combat or assault divers.
Special mission work divers (called "clearance divers" in the British Royal Navy and Royal Australian Navy), who do general work underwater.
Work divers who are trained in defusing mines and removing other explosives underwater.
These groups may overlap, and the same men may serve as both assault divers and work divers, as in the Australian Clearance Diving Branch (RAN).
The range of operations performed by these operatives includes:
Amphibious assault: stealthy deployment of land or boarding forces. The vast majority of combat swimmer missions are simply to get "from here to there" and arrive suitably equipped and in sufficient physical condition to fight on arrival. The deployment of tactical forces by water to assault land targets, oil platforms, or surface ship targets (as in boardings for seizure of evidence) is a major driver behind the equipping and training of combat swimmers. The purposes are many, but include feint and deception, counter-drug, law enforcement, counter-terrorism, and counter-proliferation missions.
Sabotage: This includes putting limpet mines on ships.
Clandestine surveying: Surveying a beach before a troop landing, or other forms of unauthorized underwater surveying in denied waters.
Clandestine underwater work, e.g.:
Recovering underwater objects.
Clandestine fitting of monitoring devices on submarine communications cables in enemy waters.
Investigating unidentified divers, or a sonar echo that may be unidentified divers. Police diving work may be included.
Checking ships, boats, structures, and harbors for limpet mines and other sabotage; and ordinary routine maintenance in war conditions.
Underwater mine clearance and bomb disposal.
Typically, a diver with closed circuit oxygen rebreathing equipment will stay within a depth limit of 20 feet (6.1 m) with limited deeper excursions to a maximum of 50 feet (15 m) because of the risk of seizure due to acute oxygen toxicity. The use of nitrox or mixed gas rebreathers can extend this depth range considerably, but this may be beyond the scope of operations, depending on the unit.
Mission descriptions: US and UK forces use these official definitions for mission descriptors:
Stealthy
keeping out of sight (e.g., underwater) when approaching the target.
Covert
carrying out an action of which the enemy may become aware, but whose perpetrator cannot easily be discovered or apprehended. Covert action often involves military force which cannot be hidden once it has happened. Stealth on approach, and frequently on departure, may be used.
Clandestine
it is intended that the enemy does not find out then or afterwards that the action has happened – for example, installing eavesdropping devices. Approach, installing the devices, and departure are all to be kept from the knowledge of the enemy. If the operation or its purpose is exposed, then the actor will usually make sure that the action at least remains "covert", or unattributable.
Defending against frogmen: Anti-frogman techniques are security methods developed to protect watercraft, ports and installations, and other sensitive resources both in or nearby vulnerable waterways from potential threats or intrusions by frogmen.
Equipment: Frogmen on clandestine operations use rebreathers, as the bubbles released by open-circuit scuba would reveal them to surface lookouts and make a noise which hydrophones could easily detect.
Origins of the name: A few different explanations have been given for the origin of the term frogman.
Paul Boyton adopted the stage name The Fearless Frogman. In the 1870s, he was a long distance swimmer who wore a rubber immersion suit, with hood.
In an interview with historian Erick Simmel, John Spence claimed that the name "frogman" was coined while he was training in a green waterproof suit, "Someone saw me surfacing one day and yelled out, 'Hey, frogman!' The name stuck for all of us."
History: In ancient Roman and Greek times, there were instances of men swimming or diving for combat, sometimes using a hollow plant stem or a long bone as a snorkel. Diving with a snorkel is mentioned by Aristotle (4th century BC). The earliest descriptions of frogmen in war are found in Thucydides' History of the Peloponnesian War. The first instance was in 425 BC, when the Athenian fleet besieged the Spartans on the small island of Sphacteria. The Spartans managed to get supplies from the mainland by underwater swimmers towing submerged sacks of supplies. In another incident of the same war, in 415 BC, the Athenians used combat divers in the port of Syracuse, Sicily. The Syracusans had planted vertical wooden poles in the bottom around their port, to prevent the Athenian triremes from entering. The poles were submerged and not visible above the surface. The Athenians used various means to cut these obstacles, including divers with saws. It is believed that the underwater sawing required snorkels for breathing and diving weights to keep the divers stable.
Also, in the writings of Al-Maqrizi, it is claimed that the naval forces of the Fatimid Caliphate, in an engagement with Byzantine forces off the coast of Messina, henceforth referred to as the Battle of the Straits, employed a novel strategy with strong similarities to modern-day frogman tactics. In the writings of Heinz Halm, who studied and translated the writings of Al-Maqrizi and other contemporary Islamic historians, it is described: "They would dive from their own ship and swim over to the enemy ship; they would fasten ropes to its rudder, along which earthenware pots containing Greek fire were then made to slide over to the enemy ship, and shattered on the sternpost." Apparently, this tactic succeeded in destroying many Byzantine vessels, and the battle ended in a major Fatimid victory; according to the Arab historians, a thousand prisoners were taken, including the Byzantine admiral, Niketas, with many of his officers, as well as a heavy Indian sword which bore an inscription indicating that it had once belonged to Muhammad.
The Hungarian Chronicon Pictum claims that Henry III's 1052 invasion of Hungary was defeated by a skillful diver who sabotaged Henry's supply fleet. The unexpected sinking of the ships is confirmed by German chronicles.
On 4 November 1918, during World War I, Italian frogmen sank the Austro-Hungarian ship Viribus Unitis.
Italy started World War II with a commando frogman force already trained. Britain, Germany, the United States, and the Soviet Union started commando frogman forces during World War II.
First frogmen: The word frogman appeared first in the stage name The Fearless Frogman of Paul Boyton, who since the 1870s broke records in long distance swimming to demonstrate a newly invented rubber immersion suit, with an inflated hood.
The first modern frogmen were the World War II Italian commando frogmen of Decima Flottiglia MAS (now "ComSubIn": Comando Raggruppamento Subacquei e Incursori Teseo Tesei) which formed in 1938 and was first in action in 1940. Originally these divers were called "Uomini Gamma" because they were members of the top secret special unit called "Gruppo Gamma", which originated from the kind of Pirelli rubber skin-suit nicknamed muta gamma used by these divers. Later they were nicknamed "Uomini Rana", Italian for "frog men", because of their underwater swimming style using a frog kick, or because their fins looked like frogs' feet.
This special corps used an early oxygen rebreather scuba set, the Auto Respiratore ad Ossigeno (A.R.O), a development of the Dräger oxygen self-contained breathing apparatus designed for the mining industry and of the Davis Submerged Escape Apparatus made by Siebe, Gorman & Co and by Bergomi, designed for escaping from sunken submarines. This was used from about 1920 for spearfishing by Italian sport divers, modified and adapted by the Italian navy engineers for safe underwater use and built by Pirelli and SALVAS from about 1933, and so became a precursor of the modern diving rebreather.
For this new way of underwater diving, the Italian frogmen trained in La Spezia, Liguria, using the newly available Genoese free diving spearfishing equipment; diving mask, snorkel, swimfins, and rubber dry suit, the first specially made diving watch (the luminescent Panerai), and the new A.R.O. scuba unit. This was a revolutionary alternative way to dive, and the start of the transition from the usual heavy underwater diving equipment of the hard hat divers which had been in general use since the 18th century, to self-contained divers, free of being tethered by an air line and rope connection.
Wartime operations: After Italy declared war, the Decima Flottiglia MAS (Xª MAS) attempted several frogman attacks on British naval bases in the Mediterranean between June 1940 and July 1941, but none were successful, because of equipment failure or early detection by British forces. On 10 September 1941, eight Xª MAS frogmen were inserted by submarine close to the British harbour at Gibraltar, where, using human torpedoes to penetrate the defences, they sank three merchant ships before escaping through neutral Spain. An even more successful attack, the Raid on Alexandria, was mounted on 19 December on the harbour at Alexandria, again using human torpedoes. The raid resulted in disabling the battleships HMS Queen Elizabeth and HMS Valiant together with a destroyer and an oil tanker, but all six frogmen were captured. Frogmen were deployed by stealth in Algeciras, Spain, from where they launched a number of limpet-mine attacks on Allied shipping at anchor off Gibraltar. Some time later they refitted the interned Italian tanker Olterra as a mothership for human torpedoes, carrying out three assaults on ships at Gibraltar between late 1942 and early 1943, sinking six of them.
Nazi Germany raised a number of frogmen units under the auspices of both the Kriegsmarine and the Abwehr, often relying on Italian expertise and equipment. In June 1944, a K-Verband frogman unit failed to destroy the bridge at Bénouville, now known as Pegasus Bridge, during the Battle of Normandy. In March 1945, a frogman squad from the Brandenburgers was deployed from their base in Venice to destroy the Ludendorff Bridge over the Rhine which had been captured by the US Army in the Battle of Remagen. Seven frogmen swam 17 kilometres (11 mi) downriver to the bridge carrying explosives, but were spotted by Canal Defence Lights. Four died, two from hypothermia, and the rest were captured.
The British Royal Navy had captured an Italian human torpedo during a failed attack on Malta; they developed a copy called the Chariot and formed a unit called the Experimental Submarine Flotilla, which later merged with the Special Boat Service. A number of Chariot operations were attempted, most notably Operation Title in October 1942, an attack on the German battleship Tirpitz, which had to be abandoned when a storm hit the fishing boat which was towing the Chariots into position. Operation Principal in January 1943 was an attack by eight Chariots on La Maddalena and Palermo harbours; although all the Chariots were lost, the new Italian cruiser Ulpio Traiano was sunk. The last and most successful British operation resulted in sinking two liners in Phuket harbour in Thailand in October 1944. Royal Navy divers did not use fins until December 1942.
Wartime developments: In 1933 Italian companies were already producing underwater oxygen rebreathers, but the first diving set known as SCUBA was invented in 1939 by Christian Lambertsen, who originally called it the Lambertsen Amphibious Respirator Unit (LARU) and patented it in 1940. He later renamed it the Self Contained Underwater Breathing Apparatus, which, contracted to SCUBA, eventually became the generic term for both open circuit and rebreather autonomous underwater breathing equipment.
Lambertsen demonstrated it to the Office of Strategic Services (OSS) (after already being rejected by the U.S. Navy) in a pool at a hotel in Washington D.C. OSS not only bought into the concept, they hired Lambertsen to lead the program and build up the dive element of their Maritime Unit. The OSS was the predecessor of the Central Intelligence Agency; the maritime element still exists inside the CIA's Special Activities Division.
John Spence, an enlisted member of the U.S. Navy, was the first man selected to join the OSS group.
Postwar operations: In April 1956, Commander Lionel Crabb, a wartime pioneer of Royal Navy combat diving, disappeared during a covert inspection of the hull of the Soviet Navy Sverdlov-class cruiser, Ordzhonikidze, while she was moored in Portsmouth Harbour.
The Shayetet 13 commandos of the Israeli Navy have carried out a number of underwater raids on harbors. They were initially trained by veterans of Xª MAS and used Italian equipment. As part of Operation Raviv in 1969, eight frogmen used two human torpedoes to enter Ras Sadat naval base near Suez, where they destroyed two motor torpedo boats with mines.
During the 1982 Falklands War, the Argentinian Naval Intelligence Service planned an attack, code-named Operation Algeciras, on British warships at Gibraltar. Three frogmen, recruited from a former anti-government insurgent group, were to plant mines on the ships' hulls. The operation was abandoned when the divers were arrested by Spanish police and deported.
In 1985, the French nuclear weapons tests at Moruroa in the Pacific Ocean were being contested by environmental protesters led by the Greenpeace campaign ship, Rainbow Warrior. The Action Division of the French Directorate-General for External Security devised a plan to sink the Rainbow Warrior while it was berthed in harbor at Auckland in New Zealand. Two divers from the Division posed as tourists and attached two limpet mines to the ship's hull; the resulting explosion sank the ship and killed a Netherlands citizen on board. Two agents from the team, but not the divers, were arrested by the New Zealand Police and later convicted of manslaughter. The French government finally admitted responsibility two months later.
In the U.S. Navy, frogmen were officially phased out in 1983 and all active duty frogmen were transferred to SEAL units. In 1989, during the U.S. invasion of Panama, a team of four U.S. Navy SEALs using rebreathers conducted a combat swimmer attack on the Presidente Porras, a gunboat and yacht belonging to Manuel Noriega. The commandos attached explosives to the vessel as it was tied to a pier in the Panama Canal, escaping only after being attacked with grenades. Three years later during Operation Restore Hope, members of SEAL Team One swam to shore in Somalia to measure beach composition, water depth, and shore gradient ahead of a Marine landing. The mission resulted in several of the SEALs becoming ill as Somalia's waters were contaminated with raw sewage.
In 1978, the U.S. Navy Special Operations Officer (1140) community was established by combining Explosive Ordnance Disposal (EOD) and Expendable Ordnance Management officers with Diving and Salvage officers. Special Ops Officers would become qualified in at least two functional areas - normally EOD or Diving and Salvage, and Expendable Ordnance Management. Officers trained in diving and salvage techniques were now allowed to follow a career pattern that took advantage of their training, and unrestricted line officers were now permitted to specialize in salvage, with repeat tours of duty and advanced training. Career patterns were developed to ensure that officers assigned to command were seasoned in salvage operations and well qualified in the technical aspects of their trade. "The combination gave a breadth and depth of professionalism to Navy salvage that had not been possible before."
See also: List of military diving units
Lionel Crabb – Royal Navy frogman and MI6 diver
Military diving
Underwater Demolition Team
References:
Further reading: Frogman operations: Decima Flottiglia MAS, Underwater Demolition Team, human torpedo, Sinking of the Rainbow Warrior, Russian commando frogmen
Bush, Elizabeth Kauffman (2004). America's first frogman : the Draper Kauffman story. Naval Institute Press. ISBN 1-59114-098-6. OCLC 55699399.
Fraser, Ian (1957). Frogman V.C. Angus & Robertson. OCLC 1599838.
Pugh, Marshal (1956). Frogman: Commander Crabb's story. OCLC 1280137.
Welham, Michael G.; Welham, Jacqui (1990). Frogman Spy: the mysterious disappearance of Commander 'Buster' Crabb. W.H. Allen. ISBN 1-85227-138-8. OCLC 21979335.
Groom, Tony. DIVER. Royal Naval Clearance Divers' work in the Falklands conflict. ISBN 978-1574092691.
External links: Media related to Frogman at Wikimedia Commons
Panerai during World War Two Archived 2019-04-14 at the Wayback Machine
Frogman - Training, Equipment, and Operations of Our Navy's Undersea Fighters - C.B. Colby Archived 2019-03-09 at the Wayback Machine
List of books about frogmen |
mil_tactics_continued_pretraining.csv | Full-spectrum dominance | US military doctrine: As early as April 2001 the United States Department of Defense defined "full-spectrum superiority" (FSS) as:
The cumulative effect of dominance in the air, land, maritime, and space domains and information environment, which includes cyberspace, that permits the conduct of joint operations without effective opposition or prohibitive interference.
The doctrine of Full Spectrum Operations replaced the prior one, which was known as AirLand Battle. AirLand Battle had been taught in one form or another since 1982.
The United States military's doctrine espoused a strategic intent to be capable of achieving full-spectrum superiority in a conflict, either alone or with allies, by defeating any adversary and controlling any situation across the range of military operations.
The stated intent implies significant investment in a range of capabilities: dominant maneuver, precision engagement, focused logistics, and full-dimensional protection.
Criticism: Critics of US imperialism have referred to the term as proof of the ambitions of policymakers in the US and their alleged desire for total control. Harold Pinter referred to the term in his 2005 Nobel Prize in Literature acceptance speech Art, Truth and Politics:
I have said earlier that the United States is now totally frank about putting its cards on the table. That is the case. Its official declared policy is now defined as "full spectrum dominance". That is not my term, it is theirs. "Full spectrum dominance" means control of land, sea, air and space and all attendant resources.
Metaphorical use: Full spectrum dominance is used in a number of non-military fields to describe a comprehensive tactical effort to support a strategy.
In marketing, full spectrum dominance may refer to an integrated campaign that takes into account reaching an audience across a wide variety of platforms and media to guarantee visibility and reinforcement. That might include simultaneous integration of online promotions with direct marketing, public relations, social media and other tactical marketing vehicles.
See also: Geostrategy
Network-centric warfare
Psychological warfare
Overmatch
References:
Further reading:
Engdahl, F. William (2009). Full Spectrum Dominance: Totalitarian Democracy in the New World Order. Boxborough, MA: Third Millennium Press. 268 pages.
Mahajan, Rahul (2003). Full Spectrum Dominance: U.S. Power in Iraq and Beyond. New York: Seven Stories Press. ISBN 9781583225783.
Vest, Jason "Missed Perceptions". Archived from the original on 15 May 2008. Retrieved 20 May 2007. Government Executive, 1 December 2005 |
mil_tactics_continued_pretraining.csv | Geneva Conventions | History: The Swiss businessman Henry Dunant went to visit wounded soldiers after the Battle of Solferino in 1859. He was shocked by the lack of facilities, personnel, and medical aid available to help these soldiers. As a result, he published his book, A Memory of Solferino, in 1862, on the horrors of war. His wartime experiences inspired Dunant to propose:
A permanent relief agency for humanitarian aid in times of war
A government treaty recognizing the neutrality of the agency and allowing it to provide aid in a war zone
The former proposal led to the establishment of the Red Cross in Geneva. The latter led to the 1864 Geneva Convention, the first codified international treaty that covered the sick and wounded soldiers on the battlefield. On 22 August 1864, the Swiss government invited the governments of all European countries, as well as the United States, Brazil, and Mexico, to attend an official diplomatic conference. Sixteen countries sent a total of twenty-six delegates to Geneva. On 22 August 1864, the conference adopted the first Geneva Convention "for the Amelioration of the Condition of the Wounded in Armies in the Field". Representatives of 12 states and kingdoms signed the convention.
For both of these accomplishments, Henry Dunant became co-recipient of the first Nobel Peace Prize in 1901.
On 20 October 1868 the first unsuccessful attempt to expand the 1864 treaty was undertaken. With the 'Additional Articles relating to the Condition of the Wounded in War' an attempt was initiated to clarify some rules of the 1864 convention and to extend them to maritime warfare. The Articles were signed but were only ratified by the Netherlands and the United States of America. The Netherlands later withdrew their ratification. The protection of the victims of maritime warfare would later be realized by the third Hague Convention of 1899 and the tenth Hague Convention of 1907.
In 1906 thirty-five states attended a conference convened by the Swiss government. On 6 July 1906 it resulted in the adoption of the "Convention for the Amelioration of the Condition of the Wounded and Sick in Armies in the Field", which improved and supplemented, for the first time, the 1864 convention. It remained in force until 1970 when Costa Rica acceded to the 1949 Geneva Conventions.
The 1929 conference yielded two conventions that were signed on 27 July 1929. One, the "Convention for the Amelioration of the Condition of the Wounded and Sick in Armies in the Field", was the third version to replace the original convention of 1864. The other was adopted after experiences in World War I had shown the deficiencies in the protection of prisoners of war under the Hague Conventions of 1899 and 1907. The "Convention relative to the Treatment of Prisoners of War" was not to replace these earlier conventions signed at The Hague, rather it supplemented them.
There was considerable debate over whether the Geneva Convention should prohibit indiscriminate forms of warfare, such as aerial bombings, nuclear bombings and starvation, but no agreement was reached on those forms of violence.
Inspired by the wave of humanitarian and pacifistic enthusiasm following World War II and the outrage towards the war crimes disclosed by the Nuremberg and Tokyo trials, a series of conferences was held in 1949 reaffirming, expanding and updating the prior Geneva and Hague Conventions. It yielded four distinct conventions:
The First Geneva Convention "for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field" was the fourth update of the original 1864 convention and replaced the 1929 convention on the same subject matter.
The Second Geneva Convention "for the Amelioration of the Condition of Wounded, Sick and Shipwrecked Members of Armed Forces at Sea" replaced the Hague Convention (X) of 1907. It was the first Geneva Convention on the protection of the victims of maritime warfare and mimicked the structure and provisions of the First Geneva Convention.
The Third Geneva Convention "relative to the Treatment of Prisoners of War" replaced the 1929 Geneva Convention that dealt with prisoners of war.
In addition to these three conventions, the conference also added a new elaborate Fourth Geneva Convention "relative to the Protection of Civilian Persons in Time of War". It was the first Geneva Convention not to deal with combatants, rather it had the protection of civilians as its subject matter. The 1899 and 1907 Hague Conventions had already contained some provisions on the protection of civilians and occupied territory. Article 154 specifically provides that the Fourth Geneva Convention is supplementary to these provisions in the Hague Conventions.
Despite the length of these documents, they were found over time to be incomplete. The nature of armed conflicts had changed with the beginning of the Cold War era, leading many to believe that the 1949 Geneva Conventions were addressing a largely extinct reality: on the one hand, most armed conflicts had become internal, or civil wars, while on the other, most wars had become increasingly asymmetric. Modern armed conflicts were inflicting an increasingly higher toll on civilians, which brought the need to provide civilian persons and objects with tangible protections in time of combat, bringing a much needed update to the Hague Conventions of 1899 and 1907.
In light of these developments, two Protocols were adopted in 1977 that extended the terms of the 1949 Conventions with additional protections. In 2005, a third brief Protocol was added establishing an additional protective sign for medical services, the Red Crystal, as an alternative to the ubiquitous Red Cross and Red Crescent emblems, for those countries that find them objectionable.
Commentaries: The Geneva Conventions of 12 August 1949. Commentary (The Commentaries) is a series of four volumes of books published between 1952 and 1958 and containing commentaries to each of the four Geneva Conventions. The series was edited by Jean Pictet who was the vice-president of the International Committee of the Red Cross. The Commentaries are often relied upon to provide authoritative interpretation of the articles.
Contents: The Geneva Conventions are rules that apply only in times of armed conflict and seek to protect people who are not or are no longer taking part in hostilities.
The first convention dealt with the treatment of wounded and sick armed forces in the field. The second convention dealt with the sick, wounded, and shipwrecked members of armed forces at sea. The third convention dealt with the treatment of prisoners of war during times of conflict. The fourth convention dealt with the treatment of civilians and their protection during wartime.
Individuals who fulfill the criteria of protected persons in international armed conflicts are protected by the 1949 conventions. Those not listed as protected persons in such conflicts are instead protected by international human rights law and general treaties concerning the legal status of aliens in belligerent nations.
Conventions: In international law and diplomacy the term convention refers to an international agreement, or treaty.
The First Geneva Convention "for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field" (first adopted in 1864, revised in 1906, 1929 and finally 1949);
The Second Geneva Convention "for the Amelioration of the Condition of Wounded, Sick and Shipwrecked Members of Armed Forces at Sea" (first adopted in 1949, successor of the Hague Convention (X) 1907);
The Third Geneva Convention "relative to the Treatment of Prisoners of War" (first adopted in 1929, last revision in 1949);
The Fourth Geneva Convention "relative to the Protection of Civilian Persons in Time of War" (first adopted in 1949, based on parts of the Hague Convention (II) of 1899 and Hague Convention (IV) 1907).
With the first and third conventions revised and the second and fourth newly adopted in 1949, the whole set is referred to as the "Geneva Conventions of 1949" or simply the "Geneva Conventions". Usually only the Geneva Conventions of 1949 are referred to as First, Second, Third or Fourth Geneva Convention. The treaties of 1949 were ratified, in whole or with reservations, by 196 countries.
Protocols: The 1949 conventions have been modified with three amendment protocols:
Protocol I (1977) relating to the Protection of Victims of International Armed Conflicts
Protocol II (1977) relating to the Protection of Victims of Non-International Armed Conflicts
Protocol III (2005) relating to the Adoption of an Additional Distinctive Emblem
Application: The Geneva Conventions apply in times of war and armed conflict to governments that have ratified their terms. The details of applicability are spelled out in Common Articles 2 and 3.
Common Article 2 relating to international armed conflict (IAC): This article states that the Geneva Conventions apply to all the cases of international armed conflict (IAC), where at least one of the warring nations has ratified the Conventions. Primarily:
The Conventions apply to all cases of declared war between signatory nations. This is the original sense of applicability, which predates the 1949 version.
The Conventions apply to all cases of armed conflict between two or more signatory nations. This language was added in 1949 to accommodate situations that have all the characteristics of war without the existence of a formal declaration of war, such as a police action. |
mil_tactics_continued_pretraining.csv | Geneva Conventions | The details of applicability are spelled out in Common Articles 2 and 3.
Common Article 2 relating to international armed conflict (IAC): This article states that the Geneva Conventions apply to all the cases of international armed conflict (IAC), where at least one of the warring nations has ratified the Conventions. Primarily:
The Conventions apply to all cases of declared war between signatory nations. This is the original sense of applicability, which predates the 1949 version.
The Conventions apply to all cases of armed conflict between two or more signatory nations. This language was added in 1949 to accommodate situations that have all the characteristics of war without the existence of a formal declaration of war, such as a police action.
The Conventions apply to a signatory nation even if the opposing nation is not a signatory, but only if the opposing nation "accepts and applies the provisions" of the Conventions.
Article 1 of Protocol I further clarifies that armed conflict against colonial domination and foreign occupation also qualifies as an international conflict.
When the criteria of international armed conflict have been met, the full protections of the Conventions are considered to apply.
Common Article 3 relating to non-international armed conflict (NIAC): This article states that certain minimum rules of war apply to armed conflicts "not of an international character." The International Committee of the Red Cross has explained that this language describes non-international armed conflict (NIAC) "where at least one Party is not a State." For example, it would apply to conflicts between state forces and non-state actors (NSAs), or between two NSAs, or to other conflicts that have all the characteristics of war, whether carried out within the confines of one country or not.
There are two criteria to distinguish non-international armed conflicts from lower forms of violence. The level of violence has to be of certain intensity, for example when the state cannot contain the situation with regular police forces. Also, involved non-state groups need to have a certain level of organization, like a military command structure.
The other Geneva Conventions are not applicable in this situation but only the provisions contained within Article 3, and additionally within the language of Protocol II. The rationale for the limitation is to avoid conflict with the rights of Sovereign States that were not part of the treaties. When the provisions of this article apply, it states that:
Persons taking no active part in the hostilities, including members of armed forces who have laid down their arms and those placed hors de combat by sickness, wounds, detention, or any other cause, shall in all circumstances be treated humanely, without any adverse distinction founded on race, colour, religion or faith, sex, birth or wealth, or any other similar criteria. To this end, the following acts are and shall remain prohibited at any time and in any place whatsoever with respect to the above-mentioned persons:
violence to life and person, in particular murder of all kinds, mutilation, cruel treatment and torture;
taking of hostages;
outrages upon personal dignity, in particular humiliating and degrading treatment; and
the passing of sentences and the carrying out of executions without previous judgment pronounced by a regularly constituted court, affording all the judicial guarantees which are recognized as indispensable by civilized peoples.
The wounded and sick shall be collected and cared for.
During the negotiation of the Geneva Conventions, France and Britain were initially staunchly opposed to Common Article 3. However, to save face during negotiations and make strategic concessions, France and Britain deliberately introduced ambiguous language in the text of Common Article 3 that made it easy for states to avoid the obligations of the rule. As a consequence, Common Article 3 concerns only humane treatment and does not deal with methods and means of hostilities, such as bombings committed by non-state armed groups or state forces against civilian targets in the Algerian War and the Troubles.
On February 7, 2002, President Bush adopted the view that Common Article 3 did not protect al Qaeda prisoners because the United States-al Qaeda conflict was "not of an international character." The Supreme Court of the United States invalidated the Bush Administration view of Common Article 3, in Hamdan v. Rumsfeld, by ruling that Common Article Three of the Geneva Conventions applies to detainees in the "War on Terror", and that the Guantanamo military commission process used to try these suspects was in violation of U.S. and international law. In response to Hamdan, Congress passed the Military Commissions Act of 2006, which President Bush signed into law on October 17, 2006. Like the Military Commissions Act of 2006, its successor the Military Commissions Act of 2009 explicitly forbids the invocation of the Geneva Conventions "as a basis for a private right of action."
"... Common Article 3 continues the conventional practice (reflected in both the 'Lieber' and 'The Hague' provisions) of according humanitarian protections only to 'belligerents' who defer to the laws and customs of war: not to 'insurrectionists' who defy these norms from the very outset of hostilities. Observance of the rules of warfare is what elevates an 'insurrectionist' to the legally cognizable status of 'belligerent' under the 'International law of war'; nothing short of such an 'observance' suffices to effect this transformation from the infra legal to legal."
IAC and/or NIAC classification: Whether the conflict is an IAC or a NIAC or both depends on the nature and circumstances of the situation. Since there is a general prohibition against the use of force between States (as is reflected within article 2(4) of the United Nations Charter) with respect to Common Article 2, it is generally presumed that any use of such military force which is governed by international humanitarian law (IHL) is attributable to deliberate belligerent intent.
Regarding Common Article 3, the ICRC in its 2016 commentary stated that the provision covers not just a conflict between territorial government forces and NSAs, or between NSAs themselves, but also a foreign military intervention against an NSA, though only if the territorial state consents to such intervention in its territory. Should the intervening country do so without the consent of the territorial state or in support of an NSA against that state, then Common Article 2 applies.
For example, the American-led intervention in the Syrian civil war became both an IAC with Syria and a NIAC with the Islamic State because the U.S. intervened in Syrian territory without the former's consent. On the other hand, Russia intervened in Syrian territory against the Free Syrian Army upon invitation by Syria, making Russia's participation subject only to Common Article 3 and therefore Protocol II (which Russia ratified on September 29, 1989).
The U.S.-led NATO invasion of Afghanistan from October 7 to December 17, 2001 was initially an IAC because it waged war against the Islamic Emirate of Afghanistan under Taliban rule. Once the new Karzai administration was established and recognized internationally, the conflict changed from an IAC to a NIAC, with NATO troops under International Security Assistance Force (ISAF) and Resolute Support Mission (RSM) auspices assisting the Islamic Republic of Afghanistan with its consent in battling Taliban insurgents. In contrast, the Soviet–Afghan War was an IAC because the Soviet Union invaded the Democratic Republic of Afghanistan (DRA) to remove Afghan communist leader Hafizullah Amin from power, then installed puppet leader Babrak Karmal, who "invited" Soviet troops to intervene against the Afghan mujahideen fighters.
While non-state armed groups are automatically presumed to engage in NIACs, they also can cross into the threshold of an IAC. The 2020 ICRC commentary on the Third Geneva Convention requires two elements for this classification: "the group must in fact fight on behalf of that Party" and "that Party must accept both the fighting role of the group and the fact that the fighting is done on its behalf." It further states that "[w]here a Party to a conflict has overall control over the militia, volunteer corps or organized resistance movement that has a fighting function and fights on the State's behalf, a relationship of belonging for the purposes of Article 4A(2) exists." For example, the Viet Cong was under effective control and direction by North Vietnam during the Vietnam War, therefore Common Article 2 solely applied to the conflict.
Enforcement:
Protecting powers: The term protecting power has a specific meaning under these Conventions. A protecting power is a state that is not taking part in the armed conflict, but that has agreed to look after the interests of a state that is a party to the conflict. The protecting power is a mediator enabling the flow of communication between the parties to the conflict. The protecting power also monitors the implementation of these Conventions, such as by visiting the zone of conflict and prisoners of war. The protecting power must act as an advocate for prisoners, the wounded, and civilians.
Grave breaches: Not all violations of the treaty are treated equally. The most serious crimes are termed grave breaches and provide a legal definition of a war crime. Grave breaches of the Third and Fourth Geneva Conventions include the following acts if committed against a person specifically protected by the conventions:
willful killing, torture or inhumane treatment, including biological experiments
willfully causing great suffering or serious injury to body or health
compelling a protected person to serve in the armed forces of a hostile power
willfully depriving a protected person of the right to a fair trial if accused of a war crime. |
mil_tactics_continued_pretraining.csv | Geneva Conventions | The protecting power is a mediator enabling the flow of communication between the parties to the conflict. The protecting power also monitors the implementation of these Conventions, such as by visiting the zone of conflict and prisoners of war. The protecting power must act as an advocate for prisoners, the wounded, and civilians.
Grave breaches: Not all violations of the treaty are treated equally. The most serious crimes are termed grave breaches and provide a legal definition of a war crime. Grave breaches of the Third and Fourth Geneva Conventions include the following acts if committed against a person specifically protected by the conventions:
willful killing, torture or inhumane treatment, including biological experiments
willfully causing great suffering or serious injury to body or health
compelling a protected person to serve in the armed forces of a hostile power
willfully depriving a protected person of the right to a fair trial if accused of a war crime.
Also considered grave breaches of the Fourth Geneva Convention are the following:
taking of hostages
extensive destruction and appropriation of property not justified by military necessity and carried out unlawfully and wantonly
unlawful deportation, transfer, or confinement.
Nations that are party to these treaties must enact and enforce legislation penalizing any of these crimes. Nations are also obligated to search for persons alleged to commit these crimes, or persons having ordered them to be committed, and to bring them to trial regardless of their nationality and regardless of the place where the crimes took place.
The principle of universal jurisdiction also applies to the enforcement of grave breaches when the United Nations Security Council asserts its authority and jurisdiction from the UN Charter to apply universal jurisdiction. The UNSC did this when they established the International Criminal Tribunal for Rwanda and the International Criminal Tribunal for the former Yugoslavia to investigate and/or prosecute alleged violations.
Right to a fair trial when no crime is alleged: Soldiers, as prisoners of war, will not receive a trial unless the allegation of a war crime has been made. According to article 43 of the 1949 Conventions, soldiers are employed for the purpose of serving in war; engaging in armed conflict is legitimate, and does not constitute a grave breach. Should a soldier be arrested by belligerent forces, they are to be considered "lawful combatants" and afforded the protected status of a prisoner of war (POW) until the cessation of the conflict. Human rights law applies to any incarcerated individual, including the right to a fair trial.
Charges may be brought against an enemy POW only after a fair trial, and only for acts that are explicit violations of the accords, more severe than simply fighting against the captor in battle. Otherwise, a captured soldier is not to be tried, as provided by human rights law. This element of the convention has been confused during past incidents of detainment of US soldiers by North Vietnam, where the regime attempted to try all imprisoned soldiers in court for committing grave breaches, on the incorrect assumption that their mere existence as enemies of the state violated international law.
Legacy: Although warfare has changed dramatically since the Geneva Conventions of 1949, they are still considered the cornerstone of contemporary international humanitarian law. They protect combatants who find themselves hors de combat, and they protect civilians caught up in the zone of war. These treaties came into play for all recent international armed conflicts, including the War in Afghanistan, the Iraq War, the invasion of Chechnya (1994–2017), and the Russo-Georgian War. The Geneva Conventions also protect those affected by non-international armed conflicts such as the Syrian civil war.
The lines between combatants and civilians have blurred when the actors are not exclusively High Contracting Parties (HCP). Since the fall of the Soviet Union, an HCP often is faced with a non-state actor, as argued by General Wesley Clark in 2007. Examples of such conflict include the Sri Lankan Civil War, the Sudanese Civil War, and the Colombian Armed Conflict, as well as most military engagements of the US since 2000.
Some scholars hold that Common Article 3 deals with these situations, supplemented by Protocol II (1977). These set out minimum legal standards that must be followed for internal conflicts. International tribunals, particularly the International Criminal Tribunal for the former Yugoslavia (ICTY), have clarified international law in this area. In the 1999 Prosecutor v. Dusko Tadic judgement, the ICTY ruled that grave breaches apply not only to international conflicts, but also to internal armed conflict. Further, those provisions are considered customary international law.
Controversy has arisen over the US designation of irregular opponents as "unlawful enemy combatants" (see also unlawful combatant), especially in the SCOTUS judgments over the Guantanamo Bay detention camp brig facility Hamdi v. Rumsfeld, Hamdan v. Rumsfeld and Rasul v. Bush, and later Boumediene v. Bush. President George W. Bush, aided by Attorneys-General John Ashcroft and Alberto Gonzales and General Keith B. Alexander, claimed the power, as Commander in Chief of the Armed Forces, to determine that any person, including an American citizen, who is suspected of being a member, agent, or associate of Al Qaeda, the Taliban, or possibly any other terrorist organization, is an "enemy combatant" who can be detained in U.S. military custody until hostilities end, pursuant to the international law of war.
The application of the Geneva Conventions in the Russo-Ukrainian War (2014–present) has been troublesome because some of the personnel who engaged in combat against the Ukrainians were not identified by insignia, although they did wear military-style fatigues. The types of comportment qualified as acts of perfidy under jus in bello doctrine are listed in Articles 37 through 39 of the Geneva Convention; the prohibition of fake insignia is listed at Article 39.2, but the law is silent on the complete absence of insignia. The status of POWs captured in this circumstance remains a question.
Educational institutions and organizations including Harvard University, the International Committee of the Red Cross, and the Rohr Jewish Learning Institute use the Geneva Convention as a primary text investigating torture and warfare.
New challenges: Artificial intelligence and autonomous weapon systems, such as military robots and cyber-weapons, are creating challenges in the creation, interpretation and application of the laws of armed conflict. The complexity of these new challenges, as well as the speed in which they are developed, complicates the application of the Conventions, which have not been updated in a long time. Adding to this challenge is the very slow speed of the procedure of developing new treaties to deal with new forms of warfare, and determining agreed-upon interpretations to existing ones, meaning that by the time a decision can be made, armed conflict may have already evolved in a way that makes the changes obsolete.
See also: Attacks on humanitarian workers
Convention on Certain Conventional Weapons
Customary international humanitarian law
Declaration on the Protection of Women and Children in Emergency and Armed Conflict
Geneva Conference (disambiguation)
Geneva Academy of International Humanitarian Law and Human Rights
German Prisoners of War in the United States
Hague Conventions of 1899 and 1907 – traditional rules on fighting wars
Human rights
Human shield
International Committee of the Red Cross
International Federation of Red Cross and Red Crescent Societies
International humanitarian law
Laws of war
Lieber Code General Order 100
Nuremberg Principles
Reprisals
Rule of Law in Armed Conflicts Project
Saint Petersburg Declaration of 1868
Targeted killing
References:
Further reading: Matthew Evangelista and Nina Tannenwald (eds.). 2017. Do the Geneva Conventions Matter? Oxford University Press.
Giovanni Mantilla, "Conforming Instrumentalists: Why the USA and the United Kingdom Joined the 1949 Geneva Conventions," European Journal of International Law, Volume 28, Issue 2, May 2017, Pages 483–511.
Helen Kinsella, "The image before the weapon : a critical history of the distinction between combatant and civilian" Cornell University Press.
Boyd van Dijk (2022). Preparing for War: The Making of the Geneva Conventions. Oxford University Press.
External links:
The Geneva Conventions of 12 August 1949 public domain audiobook at LibriVox
Texts and commentaries of 1949 Conventions & Additional Protocols
The Geneva Conventions: the core of international humanitarian law, ICRC
Rules of war (in a nutshell)—video
Commentaries:
GCI: Commentary
GCII: Commentary
GCIII: Commentary
GCIV: Commentary |
mil_tactics_continued_pretraining.csv | Geneva Protocol | Negotiation history: In the Hague Conventions of 1899 and 1907, the use of dangerous chemical agents was outlawed. In spite of this, the First World War saw large-scale chemical warfare. France used tear gas in 1914, but the first large-scale successful deployment of chemical weapons was by the German Empire in Ypres, Belgium in 1915, when chlorine gas was released as part of a German attack at the Battle of Gravenstafel. Following this, a chemical arms race began, with the United Kingdom, Russia, Austria-Hungary, the United States, and Italy joining France and Germany in the use of chemical weapons.
This resulted in the development of a range of horrific chemicals affecting lungs, skin, or eyes. Some were intended to be lethal on the battlefield, like hydrogen cyanide, and efficient methods of deploying agents were invented. At least 124,000 tons were produced during the war. In 1918, about one grenade out of three was filled with dangerous chemical agents. Around 1.3 million casualties of the conflict were attributed to the use of gas, and the psychological effect on troops may have been much greater.
As protective equipment developed, the technology to destroy such equipment became a part of the arms race. The use of deadly poison gas was not limited to combatants at the front; civilians were also at risk, as winds blew the poison gases through nearby towns. Civilians living in towns rarely had warning systems for poison gas attacks or access to effective gas masks. The chemical weapons employed by both sides inflicted an estimated 100,000–260,000 civilian casualties during the conflict. Tens of thousands or more, along with military personnel, died from scarring of the lungs, skin damage, and cerebral damage in the years after the conflict ended. In 1920 alone, over 40,000 civilians and 20,000 military personnel died from the effects of chemical weapons.
The Treaty of Versailles included some provisions that banned Germany from either manufacturing or importing chemical weapons. Similar treaties banned the First Austrian Republic, the Kingdom of Bulgaria, and the Kingdom of Hungary from chemical weapons; all belonged to the losing side, the Central Powers. The Russian Bolsheviks and Britain continued the use of chemical weapons in the Russian Civil War and possibly in the Middle East in 1920.
Three years after World War I, the Allies wanted to reaffirm the Treaty of Versailles, and in 1922 the United States introduced the Treaty relating to the Use of Submarines and Noxious Gases in Warfare at the Washington Naval Conference. Four of the war victors, the United States, the United Kingdom, the Kingdom of Italy and the Empire of Japan, gave consent for ratification, but it failed to enter into force as the French Third Republic objected to the submarine provisions of the treaty.
At the 1925 Geneva Conference for the Supervision of the International Traffic in Arms the French suggested a protocol for non-use of poisonous gases. The Second Polish Republic suggested the addition of bacteriological weapons. It was signed on 17 June.
Historical assessment: Eric Croddy, assessing the Protocol in 2005, took the view that the historic record showed it had been largely ineffectual. Specifically it does not prohibit:
use against not-ratifying parties
retaliation using such weapons, so effectively making it a no-first-use agreement
use within a state's own borders in a civil conflict
research and development of such weapons, or stockpiling them
In light of these shortcomings, Jack Beard notes that "the Protocol (...) resulted in a legal framework that allowed states to conduct [biological weapons] research, develop new biological weapons, and ultimately engage in [biological weapons] arms races".
As such, the use of chemical weapons inside a nation's own territory against its own citizens or subjects, as employed by Spain in the Rif War until 1927, by Japan against Seediq indigenous rebels in Taiwan (then part of the Japanese colonial empire) during the 1930 Musha Incident, by Iraq against ethnic Kurdish civilians in the 1988 attack on Halabja during the Iran–Iraq War, and by Syrian government or opposition forces during the Syrian civil war, did not breach the Geneva Protocol, nor did use against Black Lives Matter protesters in the United States.
Despite the U.S. having been a proponent of the protocol, the U.S. military and American Chemical Society lobbied against it, causing the U.S. Senate not to ratify the protocol until 1975, the same year when the United States ratified the Biological Weapons Convention.
Violations: Several state parties have deployed chemical weapons for combat in spite of the treaty. Italy used mustard gas against the Ethiopian Empire in the Second Italo-Ethiopian War. In World War II, Germany employed chemical weapons in combat on several occasions along the Black Sea, notably in Sevastopol, where they used toxic smoke to force Russian resistance fighters out of caverns below the city. They also used asphyxiating gas in the catacombs of Odesa in November 1941, following their capture of the city, and in late May 1942 during the Battle of the Kerch Peninsula in eastern Crimea, perpetrated by the Wehrmacht's Chemical Forces and organized by a special detail of SS troops with the help of a field engineer battalion. After the battle in mid-May 1942, the Germans gassed and killed almost 3,000 of the besieged and non-evacuated Red Army soldiers and Soviet civilians hiding in a series of caves and tunnels in the nearby Adzhimushkay quarry.
During the 1980–1988 Iran–Iraq War, Iraq is known to have employed a variety of chemical weapons against Iranian forces. Some 100,000 Iranian troops were casualties of Iraqi chemical weapons during the war.
Subsequent interpretation of the protocol: In 1966, United Nations General Assembly resolution 2162B called for, without any dissent, all states to strictly observe the protocol. In 1969, United Nations General Assembly resolution 2603 (XXIV) declared that the prohibition on use of chemical and biological weapons in international armed conflicts, as embodied in the protocol (though restated in a more general form), were generally recognized rules of international law. Following this, there was discussion of whether the main elements of the protocol now form part of customary international law, and now this is widely accepted to be the case.
There have been differing interpretations over whether the protocol covers the use of harassing agents, such as adamsite and tear gas, and defoliants and herbicides, such as Agent Orange, in warfare. The 1977 Environmental Modification Convention prohibits the military use of environmental modification techniques having widespread, long-lasting or severe effects. Many states do not regard this as a complete ban on the use of herbicides in warfare, but it does require case-by-case consideration. The 1993 Chemical Weapons Convention effectively banned riot control agents from being used as a method of warfare, though still permitting it for riot control.
In recent times, the protocol has been interpreted to cover non-international armed conflicts as well as international ones. In 1995, an appellate chamber in the International Criminal Tribunal for the former Yugoslavia stated that "there had undisputedly emerged a general consensus in the international community on the principle that the use of chemical weapons is also prohibited in internal armed conflicts." In 2005, the International Committee of the Red Cross concluded that customary international law includes a ban on the use of chemical weapons in internal as well as international conflicts.
However, such views drew general criticism from legal authors. They noted that chemical arms control agreements largely stem from the context of international conflicts. Furthermore, the application of customary international law to banning chemical warfare in non-international conflicts fails to meet two requirements: state practice and opinio juris. Jillian Blake & Aqsa Mahmud cited the periodic use of chemical weapons in non-international conflicts since the end of WWI (as stated above) as well as the lack of existing international humanitarian law (such as the Geneva Conventions) and national legislation and manuals prohibiting their use in such conflicts. Anne Lorenzat stated the 2005 ICRC study was rooted in "political and operational issues rather than legal ones".
State parties: To become party to the Protocol, states must deposit an instrument with the government of France (the depositary power). Thirty-eight states originally signed the Protocol. France was the first signatory to ratify the Protocol on 10 May 1926. El Salvador, the final signatory to ratify the Protocol, did so on 26 February 2008. As of April 2021, 146 states have ratified, acceded to, or succeeded to the Protocol, most recently Colombia on 24 November 2015.
Reservations: A number of countries submitted reservations when becoming parties to the Geneva Protocol, declaring that they only regarded the non-use obligations as applying with respect to other parties to the Protocol and/or that these obligations would cease to apply with respect to any state, or its allies, which used the prohibited weapons. Several Arab states also declared that their ratification did not constitute recognition of, or diplomatic relations with, Israel, or that the provision of the Protocol were not binding with respect to Israel.
Generally, reservations not only modify treaty provisions for the reserving party, but also symmetrically modify the provisions for previously ratifying parties in dealing with the reserving party. |
mil_tactics_continued_pretraining.csv | Geneva Protocol | El Salvador, the final signatory to ratify the Protocol, did so on 26 February 2008. As of April 2021, 146 states have ratified, acceded to, or succeeded to the Protocol, most recently Colombia on 24 November 2015.
Reservations: A number of countries submitted reservations when becoming parties to the Geneva Protocol, declaring that they only regarded the non-use obligations as applying with respect to other parties to the Protocol and/or that these obligations would cease to apply with respect to any state, or its allies, which used the prohibited weapons. Several Arab states also declared that their ratification did not constitute recognition of, or diplomatic relations with, Israel, or that the provision of the Protocol were not binding with respect to Israel.
Generally, reservations not only modify treaty provisions for the reserving party, but also symmetrically modify the provisions for previously ratifying parties in dealing with the reserving party. Subsequently, numerous states have withdrawn their reservations, including the former Czechoslovakia in 1990 prior to its dissolution; the Russian reservation on biological weapons, which "preserved the right to retaliate in kind if attacked" with them, was dissolved by President Yeltsin.
According to the Vienna Convention on Succession of States in respect of Treaties, states which succeed to a treaty after gaining independence from a state party "shall be considered as maintaining any reservation to that treaty which was applicable at the date of the succession of States in respect of the territory to which the succession of States relates unless, when making the notification of succession, it expresses a contrary intention or formulates a reservation which relates to the same subject matter as that reservation." While some states have explicitly either retained or renounced their reservations inherited on succession, states which have not clarified their position on their inherited reservations are listed as "implicit" reservations.
Non-signatory states: The remaining UN member states and UN observers have not acceded or succeeded to the Protocol.
References:
Further reading: Frederic Joseph Brown (2005). "Chapter 3: The Evolution of Policy 1922-1939 / Geneva Gas Protocol". Chemical warfare: a study in restraints. Transaction Publishers. pp. 98–110. ISBN 1-4128-0495-7.
Bunn, George. "Gas and germ warfare: international legal history and present status." Proceedings of the National Academy of Sciences of the United States of America 65.1 (1970): 253+. online
Webster, Andrew. "Making Disarmament Work: The implementation of the international disarmament provisions in the League of Nations Covenant, 1919–1925." Diplomacy and Statecraft 16.3 (2005): 551–569.
External links:
The text of the protocol Archived 7 September 2013 at the Wayback Machine
Weapons of War: Poison Gas |
mil_tactics_continued_pretraining.csv | Grand strategy | Definition: There is no universally accepted definition of grand strategy. One common definition is that grand strategy is a state's strategy of how means (military and nonmilitary) can be used to advance and achieve national interests in the long-term.
Grand strategy expands on the traditional idea of strategy in three ways:
expanding strategy beyond military means to include diplomatic, financial, economic, informational, etc. means
examining internal in addition to external forces – taking into account both the various instruments of power and the internal policies necessary for their implementation (conscription, for example)
including consideration of periods of peacetime in addition to wartime
Thinkers differ as to whether grand strategy should serve to promote peace (as emphasized by B. H. Liddell Hart) or advance the security of a state (as emphasized by Barry Posen).
British military historian B. H. Liddell Hart played an influential role in popularizing the concept of grand strategy in the mid-20th century. Subsequent definitions tend to build on his. He defines grand strategy as follows:
[T]he role of grand strategy – higher strategy – is to co-ordinate and direct all the resources of a nation, or band of nations, towards the attainment of the political object of the war – the goal defined by fundamental policy.
Grand strategy should both calculate and develop the economic resources and man-power of nations in order to sustain the fighting services. Also the moral resources – for to foster the people's willing spirit is often as important as to possess the more concrete forms of power. Grand strategy, too, should regulate the distribution of power between the several services, and between the services and industry. Moreover, fighting power is but one of the instruments of grand strategy – which should take account of and apply the power of financial pressure, and, not least of ethical pressure, to weaken the opponent's will. ...
Furthermore, while the horizon of strategy is bounded by the war, grand strategy looks beyond the war to the subsequent peace. It should not only combine the various instruments, but so regulate their use as to avoid damage to the future state of peace – for its security and prosperity.
History: In antiquity, the Greek word "strategy" referred to the skills of a general. By the sixth century, Byzantines distinguished between "strategy" (the means by which a general defends the homeland and defeats the enemy) and "tactics" (the science of organizing armies). Byzantine Emperor Leo VI distinguished between the two terms in his work Taktika.
Prior to the French Revolution, most thinkers wrote on military science rather than grand strategy. The term grand strategy first emerged in France in the 19th century. Jacques Antoine Hippolyte, Comte de Guibert, wrote an influential work, General Essay on Tactics, that distinguished between "tactics" and "grand tactics" (which scholars today would refer to as grand strategy). Emperor Leo's Taktika was shortly thereafter translated into French and German, leading most thinkers to distinguish between tactics and strategy.
Carl von Clausewitz proposed in an influential work that politics and war were intrinsically linked. Clausewitz defined strategy as "the use of engagements for the object of the war". Antoine-Henri Jomini argued that, because of the intrinsically political nature of war, different types of wars (e.g. offensive wars, defensive wars, wars of expediency, wars with/without allies, wars of intervention, wars of conquest, wars of opinion, national wars, civil wars) had to be waged differently, thus creating the need for a grand strategy. Some contemporaries of Clausewitz and Jomini disputed the links between politics and war, arguing that politics ceases to be important once war has begun.
Narrow definitions, similar to those of Clausewitz, were commonplace during the 19th century. Towards the end of the 19th century and into the early 20th century (in particular with B. H. Liddell Hart's writings), some writers expanded the definition of strategy to refer to the distribution and application of military means to achieve policy objectives. For these thinkers, grand strategy was not only different from the operational strategy of winning a particular battle, but it also encompassed both peacetime and wartime policies. For them, grand strategy should operate for decades (or longer) and should not cease at war's end or begin at war's start.
In the 20th century, some thinkers argued that all manners of actions (political, economic, military, cultural) counted as grand strategy in an era of total warfare. However, most definitions saw a division of labor between the actions of political leaders and those of the executing military.
According to Helmuth von Moltke, the initial task of strategy was to serve politics and the subsequent task was to prepare the means to wage war. Moltke, however, warned that plans may not survive an encounter with the enemy. Other thinkers challenged Clausewitz's idea that politics could set the aims of war, as the aims of war would change during the war given the success or failure of military operations. These thinkers argued that strategy was a process that required adaptation to changing circumstances.
Scholarship on grand strategy experienced a resurgence in the late 1960s and 1970s. Bernard Brodie defined strategy as "guide to accomplishing something and doing it efficiently... a theory for action".
Historical examples: According to historian Hal Brands, "all states... do grand strategy, but many of them do not do it particularly well."
Peloponnesian War: One of the earlier writings on grand strategy comes from Thucydides's History of the Peloponnesian War, an account of the war between the Peloponnesian League (led by Sparta) and the Delian League (led by Athens).
Roman Empire: From the era of Hadrian, Roman emperors employed a military strategy of "preclusive security—the establishment of a linear barrier of perimeter defence around the Empire. The Legions were stationed in great fortresses".
These "fortresses" existed along the perimeter of the Empire, often accompanied by actual walls (for example, Hadrian's Wall). Due to the perceived impenetrability of these perimeter defenses, the Emperors kept no central reserve army. The Roman system of roads allowed for soldiers to move from one frontier to another (for the purpose of reinforcements during a siege) with relative ease. These roads also allowed for a logistical advantage for Rome over her enemies, as supplies could be moved just as easily across the Roman road system as soldiers. This way, if the legions could not win a battle through military combat skill or superior numbers, they could simply outlast the invaders, who, as historian E.A. Thompson wrote, "Did not think in terms of millions of bushels of wheat."
The emperor Constantine moved the legions from the frontiers to one consolidated roving army as a way to save money and to protect wealthier citizens within the cities. However, this grand strategy, according to some ancient sources, had costly effects on the Roman Empire by weakening its frontier defenses and leaving it susceptible to invading armies. Also, people who lived near the Roman frontiers would begin to look to the barbarians for protection after the Roman armies departed. This argument is considered to have originated in the writings of Eunapius. As stated by the 5th-century AD historian Zosimus: Constantine abolished this frontier security by removing the greater part of the soldiery from the frontiers to cities that needed no auxiliary forces. He thus deprived of help the people who were harassed by the barbarians and burdened tranquil cities with the pest of the military, so that several straightway were deserted. Moreover, he softened the soldiers who treated themselves to shows and luxuries. Indeed, to speak plainly, he personally planted the first seeds of our present devastated state of affairs – Zosimus
This charge by Zosimus is considered to be a gross exaggeration and inaccurate assessment of the situations in the fourth century under Constantine by many modern historians. B.H. Warmington, for instance, argues that the statement by Zosimus is "[an] oversimplification," reminding us that "the charge of exposure of the frontier regions is at best anachronistic and probably reflects Zosimus' prejudices against Constantine; the corruption of the soldiers who lived in the cities was a literary commonplace."
World War II: An example of modern grand strategy is the decision of the Allies in World War II to concentrate on the defeat of Germany first. The decision, a joint agreement made after the attack on Pearl Harbor (1941) had drawn the US into the war, was a sensible one in that Germany was the most powerful member of the Axis, and directly threatened the existence of the United Kingdom and the Soviet Union. Conversely, while Japan's conquests garnered considerable public attention, they were mostly in colonial areas deemed less essential by planners and policy-makers. The specifics of Allied military strategy in the Pacific War were therefore shaped by the lesser resources made available to the theatre commanders.
Cold War: The US and the UK used a policy of containment as part of their grand strategy during the Cold War. |
mil_tactics_continued_pretraining.csv | Grand strategy | World War II: An example of modern grand strategy is the decision of the Allies in World War II to concentrate on the defeat of Germany first. The decision, a joint agreement made after the attack on Pearl Harbor (1941) had drawn the US into the war, was a sensible one in that Germany was the most powerful member of the Axis, and directly threatened the existence of the United Kingdom and the Soviet Union. Conversely, while Japan's conquests garnered considerable public attention, they were mostly in colonial areas deemed less essential by planners and policy-makers. The specifics of Allied military strategy in the Pacific War were therefore shaped by the lesser resources made available to the theatre commanders.
Cold War: The US and the UK used a policy of containment as part of their grand strategy during the Cold War.
In the United States: The conversation around grand strategy in the United States has evolved significantly since the country's founding, with the nation shifting from a strategy of continental expansion, isolation from European conflicts, and opposition to European empires in the Western hemisphere in its first century, to a major debate about the acquisition of an empire in the 1890s (culminating in the conquest of the Philippines and Cuba during the Spanish–American War), followed by rapid shifts between offshore balancing, liberal internationalism, and isolationism around the world wars. The Cold War saw increasing use of deep, onshore engagement strategies (including the creation of a number of permanent alliances, significant involvement in other states' internal politics, and a major counterinsurgency war in Vietnam.) With the end of the Cold War, an early strategic debate eventually coalesced into a strategy of primacy, culminating in the invasion of Iraq in 2003. The aftershocks of this war, along with an economic downturn, rising national debt, and deepening political gridlock, have led to a renewed strategic debate, centered on two major schools of thought: primacy and restraint. A return to offshore balancing has also been proposed by prominent political scientists Stephen Walt and John Mearsheimer.
In the 1990s: The end of the Cold War and the collapse of the Soviet Union removed the focal point of U.S. strategy: containing the Soviet Union. A major debate emerged about the future direction of U.S. foreign policy. In a 1997 article, Barry R. Posen and Andrew L. Ross identified four major grand strategic alternatives in the debate:
neo-isolationism
selective engagement
cooperative security
primacy
Neo-isolationism: Stemming from a defensive realist understanding of international politics, what the authors call "neo-isolationism" advocates the United States remove itself from active participation in international politics in order to maintain its national security. It holds that because there are no threats to the American homeland, the United States does not need to intervene abroad. Stressing a particular understanding of nuclear weapons, the authors describe how proponents believe the destructive power of nuclear weapons and retaliatory potential of the United States assure the political sovereignty and territorial integrity of the United States, while the proliferation of such weapons to countries like Britain, France, China and Russia prevents the emergence of any competing hegemon on the Eurasian landmass. The United States' security and the absence of threats means that "national defense will seldom justify intervention abroad." Even further, its proponents argue that "the United States is not responsible for, and cannot afford the costs of, maintaining world order." They also believe that "the pursuit of economic well-being is best left to the private sector," and that the United States should not attempt to spread its values because doing so increases resentment towards the U.S. and in turn, decreases its security. In short, neo-isolationism advises the United States to preserve its freedom of action and strategic independence.
In more practical terms, the authors discuss how the implementation of a so-called "neo-isolationist" grand strategy would involve less focus on the issue of nuclear proliferation, withdrawal from NATO, and major cuts to the United States military presence abroad. The authors see a military force structure that prioritizes a secure nuclear second-strike capability, intelligence, naval and special operations forces while limiting the forward-deployment of forces to Europe and Asia.
Posen and Ross identify prominent scholars and political figures such as Earl Ravenal, Patrick Buchanan and Doug Bandow as advocates of this school.
Selective engagement: With similar roots in the realist tradition of international relations, selective engagement advocates that the United States should intervene in regions of the world only if they directly affect its security and prosperity. The focus, therefore, lies on those powers with significant industrial and military potential and the prevention of war amongst those states. Most proponents of this strategy believe Europe, Asia and the Middle East matter most to the United States. Europe and Asia contain the great powers, which have the greatest military and economic impact on international politics, and the Middle East is a primary source of oil for much of the developed world. In addition to these more particular concerns, selective engagement also focuses on preventing nuclear proliferation and any conflict that could lead to a great power war, but provides no clear guidelines for humanitarian interventions.
The authors envision that a strategy of selective engagement would involve a strong nuclear deterrent with a force structure capable of fighting two regional wars, each through some combination of ground, air and sea forces complemented with forces from a regional ally. They question, however, whether such a policy could garner sustained support from a liberal democracy experienced with a moralistic approach to international relations, whether the United States could successfully differentiate necessary versus unnecessary engagement and whether a strategy that focuses on Europe, Asia and the Middle East actually represents a shift from current engagement.
In the piece, Barry Posen classified himself as a "selective engagement" advocate, with the caveat that the United States should not only act to reduce the likelihood of great power war, but also oppose the rise of a Eurasian hegemon capable of threatening the United States.
Robert J. Art argues that selective engagement is the best strategy for the twenty-first century because it is, by definition, selective. "It steers the middle course between an isolationist, unilateralist course, on the one hand, and world policeman, highly interventionist role, on the other." Therefore, Art concludes, it avoids both overly restrictive and overly expansive definitions of U.S. interests, finding instead a compromise between doing too much and too little militarily. Additionally, selective engagement is the best strategy for achieving both realist goals (preventing WMD terrorism, maintaining great power peace, and securing the supply of oil) and liberal goals (preserving free trade, spreading democracy, observing human rights, and minimizing the impact of climate change). The realist goals represent vital interests and the liberal goals represent desirable interests. Desirable interests are not unimportant, Art maintains, but they are of lesser importance when a trade-off between them and vital interests must be made. Selective engagement, however, mitigates the effect of the trade-off precisely because it is a moderate, strategic policy.
Cooperative security: The authors write "the most important distinguishing feature of cooperative security is the proposition that peace is effectively indivisible." Unlike the other three alternatives, cooperative security draws upon liberalism as well as realism in its approach to international relations. Stressing the importance of world peace and international cooperation, the view supposes the growth in democratic governance and the use of international institutions will hopefully overcome the security dilemma and deter interstate conflict. Posen and Ross propose that collective action is the most effective means of preventing potential state and non-state aggressors from threatening other states. Cooperative security considers nuclear proliferation, regional conflicts and humanitarian crises to be major interests of the United States.
The authors imagine that such a grand strategy would involve stronger support for international institutions and agreements, and the frequent use of force for humanitarian purposes. Should international institutions ultimately require the deployment of a multinational force, the authors suppose the United States' contribution would emphasize command, control, communications and intelligence, defense suppression, and precision-guided munitions, which they considered at the time to be the United States' comparative advantage in aerospace power. Collective action problems, the difficulty of forming effective international institutions, the vacillating feelings of democratic populations, and the limitations of arms control are all offered by the authors as noted criticisms of collective security.
Primacy: Primacy is a grand strategy with four parts:
Military preponderance
Reassurances and containment of allies
Integration of other states into US-designed institutions
Limits to the spread of nuclear weapons
As a result, it advocates that the United States pursue ultimate hegemony and dominate the international system economically, politically and militarily, rejecting any return to bipolarity or multipolarity and preventing the emergence of any peer competitor. Therefore, its proponents argue that U.S. foreign policy should focus on maintaining U.S. power and preventing any other power from becoming a serious challenger to the United States. With this in mind, some supporters of this strategy argue that the U.S. should work to contain China and other competitors rather than engage them. In regards to humanitarian crises and regional conflicts, primacy holds that the U.S. should only intervene when they directly impact national security, more along the lines of selective engagement than collective security. It does, however, advocate for the active prevention of nuclear proliferation at a level similar to collective security.
Implementation of such a strategy would entail military forces at similar levels to those during the Cold War, with emphasis on military modernization and research and development. They note, however, that "the quest for primacy is likely to prove futile for five reasons": the diffusion of economic and technological capabilities, interstate balancing against the United States, the danger that hegemonic leadership will fatally undermine valuable multilateral institutions, the feasibility of preventive war and the dangers of imperial overstretch. |
mil_tactics_continued_pretraining.csv | Grand strategy |
Daniel Drezner, professor of international politics at Tufts University, outlines three arguments offered by primacy enthusiasts contending that military preeminence generates positive economic externalities. "One argument, which I label 'geoeconomic favoritism,' hypothesizes that the military hegemon will attract private capital because it provides the greatest security and safety to investors. A second argument posits that the benefits from military primacy flow from geopolitical favoritism: that sovereign states, in return for living under the security umbrella of the military superpower, voluntarily transfer resources to help subsidize the cost of the economy. The third argument postulates that states are most likely to enjoy global public goods under a unipolar distribution of military power, accelerating global economic growth and reducing security tensions. These public goods benefit the hegemon as much, if not more, than they do other actors." Drezner maintains the empirical evidence supporting the third argument is the strongest, though with some qualifiers. "Although the precise causal mechanism remain disputed, hegemonic eras are nevertheless strongly correlated with lower trade barriers and greater levels of globalization." However, Drezner highlights a caveat: The cost of maintaining global public goods catches up to the superpower providing them. "Other countries free-ride off of the hegemon, allowing them to grow faster. Technologies diffuse from the hegemonic power to the rest of the world, facilitating catch-up. Chinese analysts have posited that these phenomena, occurring right now, are allowing China to outgrow the United States."
Primacy vs. selective engagement: Barry Posen, director of the Security Studies Program at the Massachusetts Institute of Technology, believes the activist U.S. foreign policy that continues to define U.S. strategy in the twenty-first century is an "undisciplined, expensive, and bloody strategy" that has done more harm than good to U.S. national security. "It makes enemies almost as fast as it slays them, discourages allies from paying for their own defense, and convinces powerful states to band together and oppose Washington's plans, further raising the costs of carrying out its foreign policy." The United States was able to afford such adventurism during the 1990s, Posen argues, because American power projection was completely unchallenged. Over the last decade, however, American power has been relatively declining while the Pentagon continues to "depend on continuous infusions of cash simply to retain its current force structure—levels of spending that the Great Recession and the United States' ballooning debt have rendered unsustainable."
Posen proposes that the United States abandon its hegemonic strategy and replace it with one of restraint. This translates into jettisoning the quest to shape a world that is satisfactory to U.S. values and instead advancing vital national security interests: the U.S. military would go to war only when it must. Large troop contingents in unprecedentedly peaceful regions such as Europe would be significantly downsized, incentivizing NATO members to provide more for their own security. Under such a scenario, the United States would have more leeway in using resources to combat the most pressing threats to its security. A strategy of restraint, therefore, would help preserve the country's prosperity and security more so than a hegemonic strategy. To be sure, Posen makes clear that he is not advocating isolationism. Rather, the United States should focus on three pressing security challenges: preventing a powerful rival from upending the global balance of power, fighting terrorists, and limiting nuclear proliferation.
John Ikenberry of Princeton University and Stephen Brooks and William Wohlforth, both of Dartmouth College, push back on Posen's selective engagement thesis, arguing that American engagement is not as bad as Posen makes it out to be. Advocates of selective engagement, they argue, overstate the costs of current U.S. grand strategy and understate the benefits. "The benefits of deep engagement...are legion. U.S. security commitments reduce competition in key regions and act as a check against potential rivals. They help maintain an open world economy and give Washington leverage in economic negotiations. And they make it easier for the United States to secure cooperation for combating a wide range of global threats."
Ikenberry, Brooks, and Wohlforth are not convinced that the current U.S. grand strategy generates subsequent counterbalancing. Unlike the prior hegemons, the United States is geographically isolated and faces no contiguous great power rivals interested in balancing it. This means the United States is far less threatening to great powers that are situated oceans away, the authors claim. Moreover, any competitor would have a hard time matching U.S. military might. "Not only is the United States so far ahead militarily in both quantitative and qualitative terms, but its security guarantees also give it the leverage to prevent allies from giving military technology to potential U.S. rivals. Because the United States dominates the high-end defense industry, it can trade access to its defense market for allies' agreement not to transfer key military technologies to its competitors."
Finally, when the United States wields its security leverage, the authors argue, it shapes the overall structure of the global economy. "Washington wins when U.S. allies favor [the] status quo, and one reason they are inclined to support the existing system is because they value their military alliances."
Ted Carpenter, senior fellow at the Cato Institute, believes that the proponents of primacy suffer from the "light-switch model," in which only two positions exist: on and off. "Many, seemingly most, proponents of U.S. preeminence do not recognize the existence of options between current policy of promiscuous global interventionism and isolationism." Adherence to the light switch model, Carpenter argues, reflects intellectual rigidity or an effort to stifle discussion about a range of alternatives to the status quo. Selective engagement is a strategy that sits in between primacy and isolationism and, given growing multipolarity and American fiscal precariousness, should be taken seriously. "Selectivity is not merely an option when it comes to embarking on military interventions. It is imperative for a major power that wishes to preserve its strategic solvency. Otherwise, overextension and national exhaustion become increasing dangers." Carpenter thinks that off-loading U.S. security responsibilities must be assessed on a case-by-case basis. Nevertheless, the United States must refrain from using military might in campaigns that do not directly deal with U.S. interests. "If a sense of moral indignation, instead of a calculating assessment of the national interest, governs U.S. foreign policy, the United States will become involved in even more murky conflicts in which few if any tangible American interests are at stake."
Today: Posen has argued that the four schools of U.S. grand strategy that he identified in the 1990s have been replaced by just two: liberal hegemony, which came from a fusion of primacy and cooperative security, and restraint, which came from a fusion of neo-isolationism and selective engagement. Other scholars have proposed a third policy, offshore balancing.
Liberal hegemony: Proponents of liberal hegemony favor a world order in which the United States is a hegemon and uses this power advantage to create a liberal international system and at times use force to enforce or spread liberal values (such as individual rights, free trade, and the rule of law). The United States strives to retain overwhelming military power, under a theory that potential competitors will not even try to compete on the global stage. It also retains an extensive network of permanent alliance commitments around the world, using the alliance system both to advance and retain hegemonic power and to solidify emerging liberal political systems. According to Posen, this strategy sees "threats emanating from three major sources: failed states, rogue states, and illiberal peer competitors." Failed states, in this view, are sources of instability; rogue states can sponsor terrorism, acquire weapons of mass destruction, and behave unpredictably; illiberal peer competitors would compete directly with the United States and "would complicate the spread of liberal institutions and the construction of liberal states." Support for liberal hegemonic strategies among major thinkers in both political parties helps explain the broad elite support for the 2003 invasion of Iraq and the 2011 intervention in Libya, even though U.S. military involvement in those conflicts had been initiated by presidents of different parties. The chief difference on foreign policy between Republican and Democratic proponents of liberal hegemony, according to Posen, is on support for international institutions as a means to achieving hegemony.
Restraint: Proponents of a grand strategy of restraint call for the United States to significantly reduce its overseas security commitments and largely avoid involvement in conflicts abroad. |
mil_tactics_continued_pretraining.csv | Grand strategy |
America would take advantage of what Posen calls a "remarkably good" strategic position: "[The United States] is rich, distant from other great powers, and defended by a powerful nuclear deterrent. Other great powers are at present weaker than the United States, close to one another, and face the same pressures to defend themselves as does the United States." Proponents of strategic restraint argue, consistent with the realist tradition, that states are self-interested and accordingly will look out for their own interests and balance against aggressors; however, when possible, states prefer to "free ride" or "cheap ride," passing the buck to other states to bear the cost of balancing. Restraint proponents also emphasize the deterrent power of nuclear weapons, which tremendously raise the stakes of confrontations between great powers, breeding caution, rather than rewarding aggression. Restraint advocates see nationalism as a powerful force, one that makes states even more resistant to outside conquest and thus makes the international system more stable. Restraint proponents also argue, drawing on thinkers like the Prussian strategist Carl von Clausewitz, that military force is a blunt, expensive, and unpredictable instrument, and that it accordingly should only be used rarely, for clear goals.
Restraint is distinct from isolationism: isolationists favor restricting trade and immigration and tend to believe that events in the outside world have little impact within the United States. As already noted, it is sometimes confused with non-interventionism. Restraint, however, sees economic dynamism as a key source of national power and accordingly tends to argue for a relatively open trade system. Some restrainers call for supporting this trade system via significant naval patrols; others suggest that the international economy is resilient against disruptions and, with rare exceptions, does not require a powerful state to guarantee the security of global trade.
Offshore balancing: In offshore balancing, the United States would refrain from significant involvement in security affairs overseas except to prevent a state from establishing hegemony in what offshore balancers identify as the world's three key strategic regions: Europe, Northeast Asia, and the Persian Gulf. This strategy advocates a significantly reduced overseas presence compared to liberal hegemony, but argues that intervention is necessary in more circumstances than restraint. Offshore balancing is associated with offensive realist theories of state behavior: it believes that conquest can often enable states to gain power, and thus that a hegemon in regions with large economies, high populations, or critical resources could quickly become a global menace to U.S. national interests.
See also:
References:
Sources: Heuser, Beatrice (2010). The Evolution of Strategy. doi:10.1017/cbo9780511762895. ISBN 978-0-521-19968-1.
Kennedy, Paul M. (1991). Grand Strategies in War and Peace. Yale University Press. ISBN 978-0-300-05666-2.
Posen, Barry R. (2014). Restraint: A New Foundation for U.S. Grand Strategy. Cornell University Press. ISBN 978-0-8014-7086-8.
Platias, Athanassios; Koliopoulos, Constantinos (2017). Thucydides on Strategy: Grand Strategies in the Peloponnesian War and Their Relevance Today. Oxford University Press. ISBN 978-0-19-754805-9.
Further reading: Gaddis, John Lewis (2018). On Grand Strategy. United States: Penguin Press. ISBN 978-1594203510.
Art, Robert J (2004). A Grand Strategy for America. Cornell University Press. ISBN 978-0-8014-8957-0.
Biddle, Stephen. American Grand Strategy After 9/11: An Assessment. April 2005
Clausewitz, Carl von. On War
Liddell Hart, B. H. Strategy. London: Faber, 1967 (2nd rev. ed.)
Luttwak, E. The Grand strategy of the Roman Empire
Papasotiriou, Harry. Grand Strategy of the Byzantine Empire
Borgwardt, Elizabeth; Nichols, Christopher Mcknight; Preston, Andrew, eds. (2021). Rethinking American Grand Strategy. doi:10.1093/oso/9780190695668.001.0001. ISBN 978-0-19-069566-8. |
mil_tactics_continued_pretraining.csv | Green-water navy | Definitions: The elements of maritime geography are loosely defined and their meanings have changed throughout history. The US's 2010 Naval Operations Concept defines blue water as "the open ocean", green water as "coastal waters, ports and harbors", and brown water as "navigable rivers and their estuaries". Robert Rubel of the US Naval War College includes bays in his definition of brown water, and in the past US military commentators have extended brown water out to 100 nautical miles (190 km) from shore.
During the Cold War, green water denoted those areas of ocean in which naval forces might encounter land-based aircraft. The development of long-range bombers with anti-ship missiles turned most of the oceans to "green" and the term all but disappeared. After the Cold War, US amphibious task forces were sometimes referred to as the green-water navy, in contrast to the blue-water carrier battle groups. This distinction disappeared as increasing threats in coastal waters forced the amphibious ships further offshore, delivering assaults by helicopter and tiltrotor from over the horizon. This prompted the development of ships designed to operate in such waters – the Zumwalt-class destroyer and the littoral combat ships; modeling has suggested that current NATO frigates are vulnerable to swarms of 4-8 small boats in green water. Rubel has proposed redefining green water as those areas of ocean which are too dangerous for high-value units, requiring offensive power to be dispersed into smaller vessels such as submarines that can use stealth and other characteristics to survive. Under his scheme, brown water would be zones in which ocean-going units could not operate at all, including rivers, minefields, straits, and other choke points.
As the preeminent blue-water navy of the early 21st century, the US Navy is able to define maritime geography in terms of offensive action in the home waters of its enemies, without being constrained by logistics. This is not true for most other navies, whose supply chains and air cover typically limit them to power projection within a few hundred kilometers of home territory. A number of countries are working on overcoming these constraints. Other authors have started to apply the term "green-water navy" to any national navy that has ocean-going ships but lacks the logistical support needed for a blue-water navy. It is often not clear what they mean, as the term is used without consistency or precision.
A green-water navy does not mean that the individual ships of the fleet are unable to function away from the coast or in open ocean: rather, it suggests that for logistical reasons they cannot be deployed for lengthy periods and must have aid from other countries to sustain long-term deployments. The term "green-water navy" is also subjective, as numerous countries that do not have a true green-water navy maintain naval forces on par with countries that are recognized as having one. For example, the German Navy has nearly the same capability as the Canadian Navy but is not recognized as a true green-water navy. Another example is the Portuguese Navy, which, despite usually being classified as a minor navy, has several times conducted sustained operations in faraway regions typical of green-water navies. However, the differences between blue-water navies and brown- or green-water navies are usually quite noticeable. For example, the US Navy was able to respond quickly to the disappearance of Malaysia Airlines Flight 370 and continue operations in the region with relative ease even though the search area covered the Indian Ocean. In contrast, in 2005 the then green-water Russian Navy was unable to respond properly when its AS-28 rescue vehicle became tangled in undersea cables and could not surface, relying on the blue-water Royal Navy to carry out the rescue in time.
Just as states build up naval capability, some lose it. For example, the Austro-Hungarian Navy was a modern green-water navy of its time, but when Austria and Hungary lost their coasts at the end of World War I, the fleet was confiscated and its ports became parts of Italy and Yugoslavia. The Axis powers lost naval capabilities after their defeat in World War II, with most of Japan's Imperial Navy and Germany's navy being disarmed and their personnel and ship numbers capped and monitored by the Allies. The collapse of the USSR also brought with it the collapse of the world's second-largest naval force and its largest submarine force. Although the Russian Federation made sure to inherit the most capable ships, passing most older models to successor states, it had lost the logistical capabilities of the Soviet Navy and was no longer able to operate away from Russian shores for extended periods of time. Moreover, budget cuts forced large reductions in the submarine force, such as the retirement of the Typhoon-class submarines. As the Soviet Navy was built largely around submarine warfare, these losses in submarine capability have adversely affected the capability of the newly formed Russian Navy as well.
Examples:
Australia: The Royal Australian Navy is well established as a green-water navy. The navy sustains a broad range of maritime operations, from the Middle East to the Pacific Ocean, often as part of international or allied coalitions. The RAN operates a modern fleet, consisting of destroyers, frigates, conventional submarines as well as an emerging amphibious and power projection capability based on the commissioning of HMAS Choules and two Canberra-class landing helicopter docks:
Carrier / Amphibious capability – 27,000 tonne HMAS Canberra and HMAS Adelaide
Amphibious capability – 16,190 tonne HMAS Choules.
Replenishment capability – 19,500 tonne HMAS Supply and HMAS Stalwart
Brazil: The Brazilian Navy has frequently been dubbed a "green-water" force by experts. The navy is primarily focused on securing the country's littorals and exclusive economic zone (EEZ), but also maintains the capacity to operate in the wider South Atlantic Ocean. Since the early 2000s, the Brazilian Navy has contributed to a number of peacekeeping and humanitarian missions:
Helicopter Carrier and amphibious capability – 21,000 tonne Atlântico.
Amphibious capability – 12,000 tonne Bahia, 8,757 tonne Newport-class tank landing ship, two 8,571 tonne Round Table-class landing ship logistics
Replenishment capability – 10,000 tonne Almirante Gastão Motta.
Canada: According to the criteria as outlined in the 2001 publication, "Leadmark: The Navy's Strategy for 2020", the Royal Canadian Navy had met its description of a 3rd tier "Medium Global Force Projection Navy" – a green-water navy with the capacity to project force worldwide with the aid of more powerful maritime allies (e.g. United Kingdom, France and the United States). In this context, the Royal Canadian Navy ranked itself alongside the navies of Australia and the Netherlands:
Replenishment capability – MV Asterix, a dual civilian-military crewed replenishment oiler. This is an interim vessel which will provide at-sea replenishment until two new AORs (Protecteur-class auxiliary vessels) are completed around 2023-2025.
Japan: The Japan Maritime Self-Defense Force is considered to be a green-water navy. Overseas JMSDF deployments include participation in the Combined Task Force 150, and an additional task force in the Indian Ocean from 2009 to combat piracy in Somalia. The first postwar overseas naval air facility of Japan was established next to Djibouti-Ambouli International Airport:
Helicopter carrier capability – two 19,000 tonne Hyūga-class helicopter destroyers and two 27,430 tonne Izumo-class helicopter destroyers (can be modified to carry fixed-wing aircraft).
Amphibious capability – three 14,000 tonne Ōsumi-class tank landing ships.
Replenishment capability – two 25,000 tonne Mashu-class and three 15,000 tonne Towada-class replenishment ships.
The Netherlands: The Royal Netherlands Navy has been officially described as a 3rd tier "Medium Global Force Projection Navy" – or a green-water navy with the capacity to project force worldwide with the aid of more powerful maritime allies (e.g. Britain, France and the United States). In this context, the Royal Netherlands Navy ranks alongside the navies of Australia and Canada, while the USN is a 1st tier global blue-water navy and Britain and France are 2nd tier blue-water navies. For many years since the end of the Cold War, the Royal Netherlands Navy has been changing its role from national defence to overseas intervention:
Amphibious capability – 12,750 tonne HNLMS Rotterdam and the 16,800 tonne HNLMS Johan de Witt.
Replenishment capability – 27,800 tonne Karel Doorman (also has amphibious capabilities), plus the combat support ship Den Helder (under construction; projected service entry 2024).
Spain: The Spanish Navy is a green-water navy, and participates in joint operations with NATO and European allies around the world. The fleet has 54 commissioned ships, including one amphibious assault ship (also used as an aircraft carrier), two amphibious transport docks, five Aegis destroyers (five more under construction), six frigates, seven corvettes (two more under construction) and three conventional submarines (four under construction).
Amphibious/carrier capability – 26,000 tonne Juan Carlos I.
Amphibious capability – two 13,815 tonne Galicia-class landing platform docks. |
mil_tactics_continued_pretraining.csv | Green-water navy |
Replenishment capability – 17,045 tonne Patiño and the 19,500 tonne Cantabria replenishment ships.
South Korea: The Republic of Korea Navy is considered to be a green-water navy. In 2011, the government authorized the building of a naval base on Jeju Island to support the new Dokdo-class amphibious assault ships; the base will also be capable of supporting joint forces with the US Navy. A ski-jump for the operation of V/STOL jet fighters is being considered for the second ship of the Dokdo class. The Korean government is considering buying surplus Harriers as a possible interim solution for the F-35 Lightning II if it chooses to operate VTOL aircraft at all. On December 3, 2021, the National Assembly passed the budget to fund a fixed-wing aircraft carrier, tentatively named the CVX-class aircraft carrier, capable of operating the F-35B and expected to enter operations possibly as early as 2033. South Korea participates in the Combined Task Force 151 with the expeditionary force Cheonghae Unit:
Helicopter carrier capability – two 18,800 tonne Dokdo-class amphibious assault ships
Amphibious capability – four 7,300 tonne Cheon Wang Bong-class LSTs, and four 4,300 tonne Go Jun Bong-class LSTs
Replenishment capability – one 23,000 tonne Soyang-class replenishment ship, and three 9,180 tonne Cheonji-class replenishment ships
Turkey: According to a report by Haifa University, Turkey's naval might has become a significant source of concern for the Middle East and the Balkans, as it has greatly modernized its maritime force in recent years. The study puts the Turkish Naval Forces as the strongest in the region (Middle East), and describes the Turkish navy as being a "green-water navy". According to Israeli Colonel Shlomo Guetta, one of the report's authors, Turkey is building a navy characteristic of a regional power and capable of conducting long-range operations. Guetta also highlighted the Turkish Navy's strike force and intervention capacity. A flagship project is the construction of TCG Anadolu, an amphibious assault ship that can serve as a light aircraft carrier. Quoting US military expert Richard Parley's estimates, the report argued that the new warship will offer Turkey unprecedented strike capabilities in the Black Sea and Eastern Mediterranean. The Turkish Navy, as of 2021, has a total of 156 naval assets, but Turkey plans to add a total of 24 new ships, which include four frigates, before the Republic reaches the 100th anniversary of its founding in 2023:
Amphibious/carrier capability – 24,660 tonne TCG Anadolu.
Amphibious capability – four 7,370 tonne Bayraktar-class tank landing ships, and the 3,773 tonne TCG Osman Gazi.
Replenishment capability – two 19,350 tonne Akar-class replenishment oilers.
Iran: Iran has recently tried to expand its naval presence beyond its own territorial waters by building new indigenous warships such as the Mowj-class frigates. Iran also participates in joint naval exercises with countries such as Russia, China and India. The Iranian navy mostly operates in the Persian Gulf, Gulf of Oman, Indian Ocean, Red Sea, Caspian Sea, and the Mediterranean, and has a fleet of 9 frigates (2 under construction), 17 corvettes and 35 conventional submarines (2 under construction).
Additionally, Iran has a second navy branch, the IRGC-N. The naval branch of the IRGC mostly operates land-based cruise missiles and speedboats, each carrying a variety of weapons, from anti-ship missiles to torpedoes and even rockets. This suits the force's mission of protecting local waters in the Persian Gulf, Gulf of Oman, and the Caspian Sea. The force has, however, expanded its arsenal by building missile corvettes and forward base ships, such as four Shahid Soleimani-class double-hulled ships (one under construction), in order to operate much further than Iranian local waters:
Amphibious capability – Four 2,581 tonne Hengam-class amphibious landing ships.
Carrier/replenishment capability – 120,000 tonne IRIS Makran, 12,000 tonne Shahid Roudaki (IRGC-N), 2,100 tonne Shahid Mahdavi (IRGC-N)
Drone carrier capability – 36,000 tonne Shahid Bagheri (IRGC-N)
Replenishment capability – two Bandar Abbas-class and four Kangan-class replenishment ships.
See also: Blue-water navy
Brown-water navy
Maritime geography
References: |
mil_tactics_continued_pretraining.csv | Grey-zone (international relations) | Definition: Use of the term grey-zone is widespread in national security circles, but there is no universal agreement on the definition of grey-zone, or even whether it is a useful term, with views about the term ranging from "faddish" or "vague", to "useful" or "brilliant".
The United States Special Operations Command defines the grey-zone as "competitive interactions among and within state and non-state actors that fall between the traditional war and peace duality." A key element of operations within the grey-zone is that they remain below the threshold of an attack which could have a legitimate conventional military response (jus ad bellum). One paper defined it as "coercive statecraft actions short of war", and a "mainly non-military domain of human activity in which states use national resources to deliberately coerce other states". The Center for Strategic and International Studies defines the grey-zone as "the contested arena somewhere between routine statecraft and open warfare." British Defence Secretary Ben Wallace called the grey-zone "that limbo land between peace and war."
Grey-zone warfare generally refers to activity in the unclear middle space that exists between direct conflict and peace in international relations.
According to Vincent Cable, examples of grey-zone activities include undermining industrial value chains or oil and gas supplies, money laundering, and the use of espionage and sabotage. According to Lee Hsi-ming "gray zone conflict is characterized by using the threat of force to create fear and intimidation." US Navy admiral Samuel Paparo has termed gray zone activities "illegal, coercive, aggressive and deceptive" (ICAP) following the preferred term of Romeo Brawner Jr.
History: The term grey-zone was coined by the United States Special Operations Command and published in a 2015 white paper. The concept of the grey-zone is built on existing military strategies; however, information technology has created radical new spaces which have expanded what is possible. Modern hybrid warfare and political warfare operations primarily occur in the grey-zone.
In the late 2010s, China escalated to grey-zone warfare with Taiwan in an attempt to force unification with the smaller country. Taiwan's Coast Guard Administration has had to expand rapidly to meet the rising grey-zone challenge. China's grey-zone operations against Taiwan in the maritime domain are meant to establish presence while maintaining plausible deniability.
Concerns: It is generally believed that non-democratic states can operate more effectively in the grey-zone as they are much less limited by domestic law and regulation. It can also be very hard for democratic states to respond to grey-zone threats because their legal and military systems are geared towards seeing conflicts through the sense of war and peace with little preparation or consideration for anything in between. This can lead democratic states to either dramatically overreact or under-react when faced with a grey-zone challenge.
Relation with hybrid warfare: The concept of grey-zone conflicts or warfare is distinct from the concept of hybrid warfare, although the two are intimately linked, as in the modern era states most often apply unconventional tools and hybrid techniques in the grey-zone. However, many of the unconventional tools used by states in the grey-zone, such as propaganda campaigns, economic pressure and the use of non-state entities, do not cross the threshold into formalized state-level aggression.
See also: Gunboat diplomacy
Proxy war
Chinese salami slicing strategy
Unrestricted Warfare
References:
Further reading: Layton, Peter (2021). China's Enduring Grey-Zone Challenge (PDF) (Online PDF). Air and Space Power Centre. ISBN 9781925062502. |
mil_tactics_continued_pretraining.csv | Ground attack aircraft | Definition and designations:
United States definition and designations: U.S. attack aircraft are currently identified by the prefix A-, as in "A-6 Intruder" and "A-10 Thunderbolt II". However, until the end of World War II the A- designation was shared between attack planes and light bombers for USAAF aircraft (as opposed to the B- prefix for medium or heavy bombers). The US Navy used a separate designation system and at the time preferred to call similar aircraft scout bombers (SB) or torpedo bombers (TB or BT). For example, the Douglas SBD Dauntless scout bomber was designated A-24 when used by the USAAF. It was not until 1946 that the US Navy and US Marine Corps adopted the "attack" (A) designation, renaming the BT2D Skyraider and BTM Mauler the AD Skyraider and AM Mauler, respectively.
As with many aircraft classifications, the definition of attack aircraft is somewhat vague and has tended to change over time. Current U.S. military doctrine defines it as an aircraft which most likely performs an attack mission, more than any other kind of mission. An attack mission means, in turn, specifically tactical air-to-ground action; in other words, neither air-to-air action nor strategic bombing is considered an attack mission. In United States Navy vocabulary, the alternative designation for the same activity is a strike mission. Attack missions are principally divided into two categories: air interdiction and close air support. In the last several decades, the rise of the ubiquitous multi-role fighter has created some confusion about the difference between attack and fighter aircraft. According to the current U.S. designation system, an attack aircraft (A) is designed primarily for air-to-surface missions (also known as "attack missions"): finding, attacking, and destroying land or sea targets. The fighter category (F) incorporates not only aircraft designed to intercept and destroy other aircraft or missiles, but also multipurpose aircraft designed for ground support missions such as interdiction and close air support. For example, the F-111 "Aardvark" was designated F despite having only minimal air-to-air capabilities. Only a single aircraft in the USAF's current inventory bears a simple, unmixed "A" designation: the A-10 Thunderbolt II.
Other designations: British designations have included FB for fighter-bomber and more recently "G" for "Ground-attack" as in Harrier GR1 (meaning "Ground-attack/Reconnaissance, Mark 1").
The Imperial Japanese Navy designation system used "B" to designate carrier attack bombers such as the Nakajima B5N Type-97 bomber, although these aircraft were mostly used for torpedo attack and level bombing. It also used "D" to specifically designate carrier dive bombers such as the Yokosuka D4Y Suisei. However, by the end of World War II the IJN had introduced the Aichi B7A Ryusei, which could perform both torpedo bombing and dive bombing, rendering the "D" designation redundant.
The NATO reporting names for Soviet/Russian ground-attack aircraft at first started with "B", categorizing them as bombers, as in the case of the Il-10 'Beast'. Later, however, they were usually classified as fighters ("F"), possibly because (since the Sukhoi Su-7) they were similar in size and visual appearance to Soviet fighters, or were simply derivatives of such.
History:
World War I: The attack aircraft as a role was defined by its use during World War I, in support of ground forces on battlefields. Battlefield support is generally divided into close air support and battlefield air interdiction, the first requiring strict and the latter only general cooperation with friendly surface forces. Such aircraft also attacked targets in rear areas. Such missions required flying where light anti-aircraft fire was expected and operating at low altitudes to precisely identify targets. Other roles, including those of light bombers, medium bombers, dive bombers, reconnaissance, fighters, fighter-bombers, could and did perform air strikes on battlefields. All these types could significantly damage ground targets from a low level flight, either by bombing, machine guns, or both.
Attack aircraft came to diverge from bombers and fighters. While bombers could be used on a battlefield, their slower speeds made them extremely vulnerable to ground fire, as did the lighter construction of fighters. The survivability of attack aircraft was guaranteed by their speed and power, protection (i.e. armor panels) and strength of construction.
Germany was the first country to produce dedicated ground-attack aircraft (designated CL-class and J-class). They were put into use in autumn 1917, during World War I. Most notable was the Junkers J.I, which pioneered the idea of an armored "bathtub", that was both fuselage structure and protection for engine and crew. The British experimented with the Sopwith TF series (termed "trench fighters"), although these did not see combat.
The last battles of 1918 on the Western Front demonstrated that ground-attacking aircraft were a valuable component of all-arms tactics. Close support ground strafing (machine-gunning) and tactical bombing of infantry (especially when moving between trenches and along roads), machine gun posts, artillery, and supply formations was a part of the Allied armies' strength in holding German attacks and supporting Allied counter-attacks and offensives. Admittedly, the cost to the Allies was high, with the Royal Flying Corps sustaining a loss rate approaching 30% among ground-attack aircraft.
1919–1939: After World War I, it was widely believed that using aircraft against tactical targets was of little use other than in harassing and undermining enemy morale; attacking combatants was generally much more dangerous to aircrews than their targets, a problem that was continually becoming more acute with the ongoing refinement of anti-aircraft weapons. Within the range of types serving attack roles, dive bombers were increasingly being seen as more effective than aircraft designed for strafing with machine guns or cannons.
Nevertheless, during the 1920s, the US military, in particular, procured specialized "Attack" aircraft and formed dedicated units, that were trained primarily for that role. The US Army Engineering Division became involved in designing ground attack aircraft. The 1920 Boeing GA-1 was an armored twin-engine triplane for ground strafing with eight machine guns and about a ton of armor plate, and the 1922 Aeromarine PG-1 was a combined pursuit (fighter) and ground attack design with a 37mm gun. The United States Marine Corps Aviation applied close air support tactics in the Banana Wars. While they did not pioneer dive bombing tactics, Marine aviators were the first to include it in their doctrine during the United States occupation of Haiti and Nicaragua. The United States Army Air Corps was notable for its creation of a separate "A-" designation for attack types, distinct from and alongside "B-" for bomber types and "P-" for pursuit (later replaced by "F-" for fighter) aircraft. The first designated attack type to be operational with the USAAC was the Curtiss A-2 Falcon. Nevertheless, such aircraft, including the A-2's replacement, the Curtiss A-12 Shrike, were unarmored and highly vulnerable to AA fire.
The British Royal Air Force focused primarily on strategic bombing, rather than ground attack. However, like most air arms of the period it did operate attack aircraft, named Army Cooperation in RAF parlance, which included the Hawker Hector, Westland Lysander and others.
Aviation played a role in the Brazilian Constitutionalist Revolution of 1932, although both sides had few aircraft. The federal government had approximately 58 aircraft divided between the Navy and the Army, as the Air Force at this time did not constitute an independent branch. In contrast, the rebels had only two Potez 25 planes and two Waco CSO, plus a small number of private aircraft.
During the 1930s, Nazi Germany had begun to field a class of Schlacht ("battle") aircraft, such as the Henschel Hs 123. Moreover, the experiences of German Condor Legion during the Spanish Civil War, against an enemy with few fighter aircraft, changed ideas about ground attack. Though equipped with generally unsuitable designs such as the Henschel Hs 123 and cannon-armed versions of the Heinkel He 112, their armament and pilots proved that aircraft were a very effective weapon, even without bombs. This led to some support within the Luftwaffe for the creation of an aircraft dedicated to this role, resulting in tenders for a new "attack aircraft". This led to the introduction (in 1942) of a unique single-seat, twin-engine attack aircraft, the slow-moving but heavily armored and formidably armed Henschel Hs 129 Panzerknacker ("Safecracker" /"Tank Cracker").
In Japan, the Imperial Japanese Navy had developed the Aichi D3A dive bomber (based on the Heinkel He 70) and the Mitsubishi B5M light attack bomber. Both, like their US counterparts, were lightly armored types, and were critically reliant on surprise attacks and the absence of significant fighter or AA opposition.
During the Winter War, the Soviet Air Forces used the Polikarpov R-5SSS and Polikarpov R-ZSh as attack aircraft. |
mil_tactics_continued_pretraining.csv | Ground attack aircraft |
Perhaps the most notable attack type to emerge during the late 1930s was the Soviet Ilyushin Il-2 Sturmovik, which became the most-produced military aircraft type in history.
As World War II approached, the concept of an attack aircraft was not well defined, and various air services used many different names for widely differing types, all performing similar roles (sometimes in tandem with the non-attack roles of bombers, fighters, reconnaissance aircraft and other types).
Army co-operation
The British concept of a light aircraft mixing all the roles that required extensive communication with land forces: reconnaissance, liaison, artillery spotting, aerial supply, and, last but not least, occasional strikes on the battlefield. The concept was similar to the front-line aircraft used in World War I, which were called the CL class in the German Empire. Eventually the RAF's experience showed types such as the Westland Lysander to be unacceptably vulnerable, and it was replaced by faster fighter types for photoreconnaissance and light aircraft for artillery spotting.
Light bomber
During the inter-war period, the British assumed that France would be the enemy in a future war. For the light day bomber role they had the Fairey Battle, which originated in a 1932 specification. Designs in 1938 for a replacement were adapted as a target tug. The last British specification issued for a light bomber was B.20/40, described as a "Close Army Support Bomber" capable of dive bombing and photoreconnaissance. However, the specification was dropped before an aircraft went into production.
Dive bomber
In some air services, dive bombers did not equip ground-attack units, but were treated as a separate class. In Nazi Germany, the Luftwaffe distinguished the Stuka (Sturzkampf-, "dive bombing") units, equipped with the Junkers Ju 87, from the Schlacht ("battle") units, which used strafing and low-level bombing types such as the Henschel Hs 123.
Fighter-bomber
Although not synonymous with ground-attack aircraft, fighter-bombers were usually used for the role, and proved to excel at it, even when they were only lightly armored. The Royal Air Force and United States Army Air Forces relegated obsolescent fighters to this role, while cutting-edge fighters would serve as interceptors and establish air superiority.
The United States Navy, in distinction to the USAAF, preferred the older term "Scout-Bomber", under an "SB-" designation, such as the Curtiss SB2C Helldiver.
World War II: The Junkers Ju 87s of the German Luftwaffe became virtually synonymous with close air support during the early months of World War II. The British Commonwealth's Desert Air Force, led by Arthur Tedder, became the first Allied tactical formation to emphasize the attack role, usually in the form of single-engine Hawker Hurricane and Curtiss P-40 fighter-bombers or specialized "tank-busters", such as the Hurricane Mk IID, armed with two 40 mm Vickers S guns (notably No. 6 Squadron RAF).
At around the same time, a massive invasion by Axis forces forced the Soviet air forces to quickly expand their army support capacity, most notably with the Ilyushin Il-2 Sturmovik. The women pilots known as the "Night Witches" utilised an obsolescent wooden light trainer biplane, the Polikarpov Po-2, and small anti-personnel bombs in "harassment bombing" attacks that proved difficult to counter.
Wartime experience showed that poorly armored and/or lightly built, pre-war types were unacceptably vulnerable, especially to fighters. Nevertheless, skilled crews could be highly successful in those types, such as the leading Stuka ace, Hans-Ulrich Rudel, who claimed 500 tanks, a battleship, a cruiser, and two destroyers in 2,300 combat missions.
The Bristol Beaufighter, based on an obsolescent RAF bomber, became a versatile twin-engine attack aircraft and served in almost every theatre of the war, in the maritime strike and ground attack roles as well as that of night fighter.
Conversely, some mid-war attack types emerged as adaptations of fighters, including several versions of the German Focke-Wulf Fw 190, the British Hawker Typhoon and the US Republic P-47 Thunderbolt. The Typhoon, which was disappointing as a fighter due to poor high-altitude performance, was very fast at low altitudes and thus became the RAF's premier ground attack fighter. It was armed with four 20mm cannon, augmented first with bombs, then rockets. Likewise, the P-47 was designed and intended for use as a high-altitude bomber escort, but gradually found that role filled by the North American P-51 Mustang (because of its much longer range and greater maneuverability). The P-47 was also heavier and more robust than the P-51 and was therefore regarded as an "energy fighter": ideal for high-speed dive-and-climb tactics, including strafing attacks. Its armament of eight 0.50 caliber machine guns was effective against Axis infantry and light vehicles in both Europe and the Pacific.
While machine guns and cannon were initially sufficient, the evolution of well-armored tanks required heavier weapons. To augment bombs, high explosive rockets were introduced, although these unguided projectiles were still "barely adequate" because of their inaccuracy. For the British RP3, one hit per sortie was considered acceptable. However, even a near miss with rockets could cause damage or injuries to "soft targets," and patrols by Allied rocket-armed aircraft over Normandy disrupted or even completely paralyzed German road traffic. They also affected morale, because even the prospect of a rocket attack was unnerving.
The ultimate development of the cannon-armed light attack aircraft was the small production run in 1944 of the Henschel Hs 129B-3, armed with a modified PAK 40 75 mm anti-tank gun. This weapon, the Bordkanone BK 7,5, was the most powerful forward-firing weapon fitted to a production military aircraft during World War II. The only other aircraft to be factory-equipped with similar guns were the 1,420 maritime strike variants of the North American B-25G/H Mitchell, which mounted either an M4 cannon or the lightweight T13E1 or M5 versions of the same gun. These weapons, however, were hand-loaded, had shorter barrels and/or a lower muzzle velocity than the BK 7,5 and, therefore, poorer armor penetration, accuracy and rate of fire. (Except for versions of the Piaggio P.108 armed with a 102mm anti-ship cannon, the BK 7,5 was unsurpassed as an aircraft-fitted gun until 1971, when the four-engine Lockheed AC-130E Spectre, equipped with a 105 mm M102 howitzer, entered service with the US Air Force.)
Post-World War II: In the immediate post-war era the piston-engined ground-attack aircraft remained useful, since all of the early jets lacked endurance due to the high fuel consumption of their engines. The higher-powered piston engine types that had arrived too late for World War II were still capable of holding their own against the jets, as they were able to both out-accelerate and out-maneuver them. The Royal Navy Hawker Sea Fury fighters and the U.S. Vought F4U Corsair and Douglas A-1 Skyraider were operated during the Korean War, while the latter continued to be used throughout the Vietnam War.
Many post-World War II era air forces have been reluctant to adopt fixed-wing jet aircraft developed specifically for ground attack. Although close air support and interdiction remain crucial to the modern battlefield, attack aircraft are less glamorous than fighters, while air force pilots and military planners have a certain well-cultivated contempt for "mud-movers". More practically, the cost of operating a specialized ground-attack aircraft is harder to justify when compared with multirole combat aircraft. Jet attack aircraft were designed and employed during the Cold War era, such as the carrier-based nuclear strike Douglas A-3 Skywarrior and North American A-5 Vigilante, while the Grumman A-6 Intruder, F-105 Thunderchief, F-111, F-117 Nighthawk, LTV A-7 Corsair II, Sukhoi Su-25, A-10 Thunderbolt II, Panavia Tornado, AMX, Dassault Étendard, Super Étendard and others were designed specifically for ground-attack, strike, close support and anti-armor work, with little or no air-to-air capability. |
mil_tactics_continued_pretraining.csv | Ground attack aircraft |
Ground attack has increasingly become a task of converted trainers, like the BAE Systems Hawk or Aero L-39 Albatros, and many trainers are built with this task in mind, like the CASA C-101 or the Aermacchi MB-339. Such counter-insurgency aircraft are popular with air forces which cannot afford to purchase more expensive multirole aircraft, or do not wish to risk the few such aircraft they have on light ground attack missions. A proliferation of low intensity conflicts in the post-World War II era has also expanded need for these types of aircraft to conduct counter-insurgency and light ground attack operations.
A primary distinction of post-World War II aviation between the U.S. Army and the U.S. Air Force was that the latter had generally been allocated all fixed-wing aircraft, while helicopters were under the control of the former; this was governed by the 1948 Key West Agreement. The Army, wishing to have its own resources to support its troops in combat and faced with a lack of Air Force enthusiasm for the ground-attack role, developed the dedicated attack helicopter.
Recent history: On 17 January 1991, Task Force Normandy began its attack on two Iraqi anti-aircraft missile sites. TF Normandy, under the command of LTC Richard A. "Dick" Cody, consisted of nine AH-64 Apaches, one UH-60 Black Hawk and four Air Force MH-53J Pave Low helicopters. The purpose of this mission was to create a safe corridor through the Iraqi air defense system. The attack was a huge success and cleared the way for the beginning of the Allied bombing campaign of Operation Desert Storm.
One concern involving the Apache arose when a unit of these helicopters was very slow to deploy during U.S. military involvement in Kosovo. According to the Army Times, the Army is shifting its doctrine to favor ground-attack aircraft over attack helicopters for deep strike attack missions because ground-attack helicopters have proved to be highly vulnerable to small-arms fire; the U.S. Marine Corps has noted similar problems.
In the late 1960s the United States Air Force requested a dedicated close air support (CAS) plane that became the Fairchild Republic A-10 Thunderbolt II. The A-10 was originally conceived as an anti-armor weapon (the A-X program requirements specifically called for an aircraft mounting a large rotary cannon to destroy massed Warsaw Pact armored forces) with limited secondary capability in the interdiction and tactical bombing roles. Today it remains the only dedicated fixed-wing ground-attack aircraft in any U.S. military service. Overall U.S. experience in the Gulf War, Kosovo War, Afghanistan War, and Iraq War has resulted in renewed interest in such aircraft. The U.S. Air Force is currently researching a replacement for the A-10 and started the OA-X program to procure a light attack aircraft.
The Soviets' similar Sukhoi Su-25 (Frogfoot) found success in the "flying artillery" role with many air forces.
The UK completely retired the BAE Harrier II in 2011, and the Panavia Tornado dedicated attack-reconnaissance aircraft in 2019. It obtained the F-35 in 2018 and retains its fleet of Eurofighter Typhoon multirole fighters.
See also: Air-to-ground weaponry
Gunship
Interdictor
List of attack aircraft
Pace-Finletter MOU 1952
References:
Citations:
Sources:
External links: Media related to Attack aircraft at Wikimedia Commons |
mil_tactics_continued_pretraining.csv | Guerrilla warfare | Etymology: The Spanish word guerrilla is the diminutive form of guerra ("war"); hence, "little war". The term became popular during the early-19th century Peninsular War, when, after the defeat of their regular armies, the Spanish and Portuguese people successfully rose against the Napoleonic troops and defeated a highly superior army using the guerrilla strategy in combination with a scorched earth policy and people's war (see also attrition warfare against Napoleon). In correct Spanish usage, a person who is a member of a guerrilla unit is a guerrillero ([geriˈʎeɾo]) if male, or a guerrillera ([geriˈʎeɾa]) if female. Arthur Wellesley adopted the term "guerrilla" into English from Spanish usage in 1809, to refer to the individual fighters (e.g., "I have recommended to set the Guerrillas to work"), and also (as in Spanish) to denote a group or band of such fighters. However, in most languages guerrilla still denotes a specific style of warfare. The use of the diminutive evokes the differences in number, scale, and scope between the guerrilla army and the formal, professional army of the state.
History: Prehistoric tribal warriors presumably employed guerrilla-style tactics against enemy tribes:
Primitive (and guerrilla) warfare consists of war stripped to its essentials: the murder of enemies; the theft or destruction of their sustenance, wealth, and essential resources; and the inducement in them of insecurity and terror. It conducts the basic business of war without recourse to ponderous formations or equipment, complicated maneuvers, strict chains of command, calculated strategies, timetables, or other civilized embellishments.
Evidence of conventional warfare, on the other hand, did not emerge until 3100 BC in Egypt and Mesopotamia. The Chinese general and strategist Sun Tzu, in his The Art of War (6th century BC), became one of the earliest to propose the use of guerrilla warfare. This inspired developments in modern guerrilla warfare.
In the 3rd century BC, Quintus Fabius Maximus Verrucosus used elements of guerrilla warfare, such as evading battle, wearing down the enemy, and ambushing small detachments, and devised the Fabian strategy, which the Roman Republic used to great effect against Hannibal's army (see also His Excellency: George Washington on the Fabian choice). The Roman general Quintus Sertorius is also noted for his skillful use of guerrilla warfare during his revolt against the Roman Senate.
In the medieval Roman Empire, guerrilla warfare was frequently practiced between the eighth through tenth centuries along the eastern frontier with the Umayyad and then Abbasid caliphates. Tactics involved a heavy emphasis on reconnaissance and intelligence, shadowing the enemy, evacuating threatened population centres, and attacking when the enemy dispersed to raid. In the later tenth century this form of warfare was codified in a military manual known by its later Latin name as De velitatione bellica ('On Skirmishing') so it would not be forgotten in the future.
The Normans made frequent forays into Wales, where the Welsh used the mountainous terrain, with which the Normans were unfamiliar, to spring surprise attacks upon them.
Since the Enlightenment, ideologies such as nationalism, liberalism, socialism, and religious fundamentalism have played an important role in shaping insurgencies and guerrilla warfare.
In the 17th century, Chatrapati Shivaji Maharaj, founder of the Maratha Kingdom, pioneered the Shiva sutra or Ganimi Kava (Guerrilla Tactics) to defeat the many times larger and more powerful armies of the Mughal Empire.
Kerala Varma (Pazhassi Raja) (1753–1805) used guerrilla techniques, chiefly centred in mountain forests, in the Cotiote War against the British East India Company in India between 1793 and 1806. Arthur Wellesley (in India 1797–1805) commanded forces assigned to defeat Pazhassi's techniques but failed. It was the longest war waged by the East India Company during its military campaigns on the Indian subcontinent, and one of its bloodiest and hardest: some Presidency army regiments suffered losses as high as eighty percent over ten years of warfare.
The Dominican Restoration War was a guerrilla war between 1863 and 1865 in the Dominican Republic between nationalists and Spain, the latter of which had recolonized the country 17 years after its independence. The war resulted in the withdrawal of Spanish forces and the establishment of a second republic in the Dominican Republic.
The Moroccan military leader Abd el-Krim (c. 1883 – 1963) and his father unified the Moroccan tribes under their control and took up arms against the Spanish and French occupiers during the Rif War in 1920. For the first time in history, tunnel warfare was used alongside modern guerrilla tactics, which caused considerable damage to both the colonial armies in Morocco.
In the early 20th century Michael Collins and Tom Barry both developed many tactical features of guerrilla warfare during the guerrilla phase of the 1919–1921 Irish War of Independence. Collins developed mainly urban guerrilla warfare tactics in Dublin City (the Irish capital). Operations in which small Irish Republican Army (IRA) units (3 to 6 guerrillas) quickly attacked a target and then disappeared into civilian crowds frustrated the British enemy. The best example of this occurred on Bloody Sunday (21 November 1920), when Collins's assassination unit, known as "The Squad", wiped out a group of British intelligence agents ("the Cairo Gang") early in the morning (14 were killed, six were wounded) – some regular officers were also killed in the purge. That afternoon, the Royal Irish Constabulary force consisting of both regular RIC personnel and the Auxiliary Division took revenge, shooting into a crowd at a football match in Croke Park, killing fourteen civilians and injuring 60 others.
In West County Cork, Tom Barry was the commander of the IRA West Cork brigade. Fighting in west Cork was rural, and the IRA fought in much larger units than their fellows in urban areas. These units, called "flying columns", engaged British forces in large battles, usually for between 10 and 30 minutes. The Kilmichael Ambush in November 1920 and the Crossbarry Ambush in March 1921 are the most famous examples of Barry's flying columns causing large casualties to enemy forces.
The Algerian Revolution of 1954 started with a handful of Algerian guerrillas. Primitively armed, the guerrillas fought the French for over eight years. This remains a prototype for modern insurgency and counterinsurgency, terrorism, torture, and asymmetric warfare prevalent throughout the world today. In South Africa, African National Congress (ANC) members studied the Algerian War, prior to the release and apotheosis of Nelson Mandela; in their intifada against Israel, Palestinian fighters have sought to emulate it. Additionally, the tactics of Al-Qaeda closely resemble those of the Algerians.
The Mukti Bahini (Bengali: মুক্তিবাহিনী, translates as "freedom fighters", or liberation army), also known as the Bangladesh Forces, was the guerrilla resistance movement consisting of the Bangladeshi military, paramilitary and civilians during the Bangladesh Liberation War that transformed East Pakistan into Bangladesh in 1971. An earlier name Mukti Fauj was also used.
Theoretical works: The growth of guerrilla warfare was inspired in part by theoretical works on guerrilla warfare, starting with the Manual de Guerra de Guerrillas by Matías Ramón Mella, written in the 19th century: "...our troops should...fight while protected by the terrain...using small, mobile guerrilla units to exhaust the enemy...denying them rest so that they only control the terrain under their feet."
More recently, Mao Zedong's On Guerrilla Warfare, Che Guevara's Guerrilla Warfare, and Lenin's Guerrilla Warfare were all written after the successful revolutions they carried out in China, Cuba and Russia, respectively. Those texts characterized the tactic of guerrilla warfare as, according to Che Guevara's text, being "used by the side which is supported by a majority but which possesses a much smaller number of arms for use in defense against oppression".
Foco theory: Why does the guerrilla fighter fight? We must come to the inevitable conclusion that the guerrilla fighter is a social reformer, that he takes up arms responding to the angry protest of the people against their oppressors, and that he fights in order to change the social system that keeps all his unarmed brothers in ignominy and misery.
In the 1960s, the Marxist revolutionary Che Guevara developed the foco (Spanish: foquismo) theory of revolution in his book Guerrilla Warfare, based on his experiences during the 1959 Cuban Revolution. This theory was later formalized as "focal-ism" by Régis Debray. Its central principle is that vanguardism by cadres of small, fast-moving paramilitary groups can provide a focus for popular discontent against a sitting regime, and thereby lead a general insurrection. Although the original approach was to mobilize and launch attacks from rural areas, many foco ideas were adapted into urban guerrilla warfare movements.
Strategy, tactics and methods:
Strategy: Guerrilla warfare is a type of asymmetric warfare: competition between opponents of unequal strength. |
mil_tactics_continued_pretraining.csv | Guerrilla warfare | It is also a type of irregular warfare: that is, it aims not simply to defeat an invading enemy, but to win popular support and political influence, to the enemy's cost. Accordingly, guerrilla strategy aims to magnify the impact of a small, mobile force on a larger, more cumbersome one. If successful, guerrillas weaken their enemy by attrition, eventually forcing them to withdraw.
Tactics: Tactically, guerrillas usually avoid confrontation with large units and formations of enemy troops but seek and attack small groups of enemy personnel and resources to gradually deplete the opposing force while minimizing their own losses. The guerrilla prizes mobility, secrecy, and surprise, organizing in small units and taking advantage of terrain that is difficult for larger units to use. For example, Mao Zedong summarized basic guerrilla tactics at the beginning of the Chinese Civil War as: "The enemy advances, we retreat; the enemy camps, we harass; the enemy tires, we attack; the enemy retreats, we pursue." At least one author credits the ancient Chinese work The Art of War with inspiring Mao's tactics. In the 20th century, other communist leaders, including North Vietnamese Ho Chi Minh, often used and developed guerrilla warfare tactics, which provided a model for their use elsewhere, leading to the Cuban "foco" theory and the anti-Soviet Mujahideen in Afghanistan.
Unconventional methods: Guerrilla groups may use improvised explosive devices and logistical support by the local population. The opposing army may come at last to suspect all civilians as potential guerrilla backers. The guerrillas might get political support from foreign backers and many guerrilla groups are adept at public persuasion through propaganda and use of force. Some guerrilla movements today also rely heavily on children as combatants, scouts, porters, spies, informants, and in other roles. Many governments and states also recruit children within their armed forces.
Comparison of guerrilla warfare and terrorism: No definition of "terrorism" has attained broad consensus. The term "terrorism" is often used as political propaganda by belligerents (most often by governments in power) to denounce opponents whose status as terrorists is disputed.
While the primary concern of guerrillas is the enemy's active military units, terrorists are largely concerned with non-military agents and mostly target civilians.
See also:
Notes:
References: Asprey, Robert Brown (2023). "guerrilla warfare". Encyclopædia Britannica.
Boeke, Sergei (2019). "Al-Qaeda in the Islamic Maghreb". International Relations. Oxford University Press. doi:10.1093/obo/9780199743292-0267. ISBN 978-0-19-974329-2. Retrieved 17 July 2021.
Boot, Max (2013). Invisible Armies: An Epic History of Guerrilla Warfare from Ancient Times to the Present. Liveright. pp. 10–11, 55. ISBN 978-0-87140-424-4.
Chamberlin, Paul Thomas (2015). The global offensive : the United States, the Palestine Liberation Organization, and the making of the post-cold war order. Oxford University Press. ISBN 978-0-19-021782-2. OCLC 907783262.
Child Soldiers International (2012). "Louder than words: An agenda for action to end state use of child soldiers". Archived from the original on 8 March 2019. Retrieved 19 January 2018.
Child Soldiers International (2016). "A law unto themselves? Confronting the recruitment of children by armed groups". Archived from the original on 8 March 2019. Retrieved 19 January 2018.
Creveld, Martin van (2000). "Technology and War II:Postmodern War?". In Charles Townshend (ed.). The Oxford History of Modern War. New York, USA: Oxford University Press. pp. 356–358. ISBN 978-0-19-285373-8.
Dennis, George (1985). Three Byzantine Military Treatises. Washington, D.C.: Dumbarton Oaks. p. 147.
Detsch, J (2017). "Pentagon braces for Islamic State insurgency after Mosul". Al-Monitor. Archived from the original on 12 July 2017. Retrieved 24 January 2018.
Drew, Allison (2015). "Visions of liberation: the Algerian war of independence and its South African reverberations". Review of African Political Economy. 42 (143): 22–43. doi:10.1080/03056244.2014.1000288. hdl:10.1080/03056244.2014.1000288. ISSN 0305-6244. S2CID 144545186.
Duff, James Grant (2014). The History Of The Mahrattas. Pickle Partners Publishing. p. 376. ISBN 9781782892335.
Ellis, Joseph J. (2005). His Excellency : George Washington. New York: Vintage Books. pp. 92–109. ISBN 9781400032532 – via Internet Archive.
Emmerson, B (2016). "Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism" (PDF). www.un.org. Retrieved 24 January 2018.
etymonline (2023). "guerrilla". Origin and meaning of guerrilla by Online Etymology Dictionary. Retrieved 14 July 2023.
Ferriter, Diarmaid (2020). "Diarmaid Ferriter: Bloody Sunday 1920 changed British attitudes to Ireland". The Irish Times.
Guevara, Ernesto Che (2006). Guerrilla Warfare – via Internet Archive.
Halibozek, Edward P.; Jones, Andy; Kovacich, Gerald L. (2008). The corporate security professional's handbook on terrorism (illustrated ed.). Elsevier (Butterworth-Heinemann). pp. 4–5. ISBN 978-0-7506-8257-2. Retrieved 17 December 2016.
Hanhimäki, Jussi M.; Blumenau, Bernhard; Rapaport, David (2013). "The Four Waves of Modern Terrorism" (PDF). An International History of Terrorism: Western and Non-Western Experiences. Routledge. pp. 46–73. ISBN 9780415635417. Archived from the original (PDF) on 21 February 2014.
historyireland (2003). "Bloody Sunday 1920: new evidence".
Hooper, Nicholas; Bennett, Matthew (1996). Cambridge illustrated atlas, warfare : the Middle Ages, 768-1487. Cambridge University Press. ISBN 978-0-521-44049-3 – via Internet Archive.
Horne, Alistair (2022). "A Savage War of Peace: Algeria 1954–1962. Rev. ed". The SHAFR Guide Online. doi:10.1163/2468-1733_shafr_sim220070002. Retrieved 17 July 2023.
islamicus (2023). "ABD EL-KRIM". Islamicus. Archived from the original on 14 June 2023.
Keeley, Lawrence H. (1997). War Before Civilization. Oxford University Press.
Kruijt, Dirk; Tristán, Eduardo Rey; Álvarez, Alberto Martín (2019). Latin American Guerrilla Movements: Origins, Evolution, Outcomes. Routledge. ISBN 9780429534270.
Laqueur, Walter (1977). Guerrilla : a historical and critical study. Weidenfeld and Nicolson. ISBN 9780297771845 – via Internet archive.
Lenin, V. I. (1906). "Guerrilla Warfare". Archived from the original on 11 May 2023 – via Internet archive.
Leonard, Thomas M. (1989). Encyclopedia of the developing world.
Mao, Zedong (1965). Selected Works: A Single Spark Can Start a Prairie Fire. Vol. I. Foreign Languages Press – via Internet Archive. |
mil_tactics_continued_pretraining.csv | Guerrilla warfare |
Mao, Zedong (1989). On Guerrilla Warfare. Washington: U.S. Marine Corps – via Internet Archive.
McMahon, Lucas (2016). "De Velitatione Bellica and Byzantine Guerrilla Warfare" (PDF). The Annual of Medieval Studies at CEU. 22: 22–33. Archived (PDF) from the original on 7 August 2021.
McNeilly, Mark (2003). Sun Tzu and the Art of Modern Warfare. p. 204.
OED (2023). "guerrilla". Oxford English Dictionary.
Pons, Frank Moya (1998). The Dominican Republic: a national history. Markus Wiener Publishers. ISBN 978-1-55876-192-6. Retrieved 15 August 2011.
Rowe, P (2002). "Freedom fighters and rebels: the rules of civil war". J R Soc Med. 95 (1): 3–4. doi:10.1177/014107680209500102. PMC 1279138. PMID 11773342.
Snyder, Craig (1999). Contemporary security and strategy.
Sinclair, Samuel Justin; Antonius, Daniel (2012). The Psychology of Terrorism Fears. Oxford University Press, USA. ISBN 978-0-19-538811-4.
Tamer, Dr. Cenk (25 September 2017). "The Differences Between the Guerrilla Warfare and Terrorism".
Tomes, Robert (2004). "Relearning Counterinsurgency Warfare" (PDF). Parameters. Archived from the original (PDF) on 7 June 2010.
United Nations Secretary-General (2017). "Report of the Secretary-General: Children and armed conflict, 2017". www.un.org. Retrieved 24 January 2018.
Williamson, Myra (2009). Terrorism, war and international law: the legality of the use of force against Afghanistan in 2001. Ashgate Publishing. ISBN 978-0-7546-7403-0.
Wilson, William John (1883). History of Madras Army. Printed by E. Keys at the Govt. Press – via Internet Archive.
Attribution:
This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Flying column". Encyclopædia Britannica. Vol. 10 (11th ed.). Cambridge University Press. p. 585.
Further reading: Asprey, Robert. War in the Shadows: The Guerrilla in History
Beckett, I. F. W. (15 September 2009). Encyclopedia of Guerrilla Warfare (Hardcover). Santa Barbara, California: Abc-Clio Inc. ISBN 978-0874369298.
Derradji Abder-Rahmane, The Algerian Guerrilla Campaign Strategy & Tactics, Lewiston, New York: Edwin Mellen Press, 1997.
Hinckle, Warren (with Steven Chain and David Goldstein): Guerrilla-Krieg in USA (Guerrilla war in the USA), Stuttgart (Deutsche Verlagsanstalt) 1971. ISBN 3-421-01592-9
Keats, John (1990). They Fought Alone. Time Life. ISBN 0-8094-8555-9
Kreiman, Guillermo (2024). "Revolutionary days: Introducing the Latin American Guerrillas Dataset". Journal of Peace Research.
MacDonald, Peter. Giap: The Victor in Vietnam
The Heretic: the life and times of Josip Broz-Tito. 1957.
Oller, John. The Swamp Fox: How Francis Marion Saved the American Revolution. Boston: Da Capo Press, 2016. ISBN 978-0-306-82457-9.
Peers, William R.; Brelis, Dean. Behind the Burma Road: The Story of America's Most Successful Guerrilla Force. Boston: Little, Brown & Co., 1963.
Polack, Peter. Guerrilla Warfare: Kings of Revolution. Casemate. ISBN 9781612006758.
Thomas Powers, "The War without End" (review of Steve Coll, Directorate S: The CIA and America's Secret Wars in Afghanistan and Pakistan, Penguin, 2018, 757 pp.), The New York Review of Books, vol. LXV, no. 7 (19 April 2018), pp. 42–43. "Forty-plus years after our failure in Vietnam, the United States is again fighting an endless war in a faraway place against a culture and a people we don't understand for political reasons that make sense in Washington, but nowhere else." (p. 43.)
Schmidt, LS. 1982. "American Involvement in the Filipino Resistance on Mindanao During the Japanese Occupation, 1942–1945" Archived 5 October 2015 at the Wayback Machine. M.S. Thesis. U.S. Army Command and General Staff College. 274 pp.
Sutherland, Daniel E. "Sideshow No Longer: A Historiographical Review of the Guerrilla War." Civil War History 46.1 (2000): 5–23; American Civil War, 1861–65
Sutherland, Daniel E. A Savage Conflict: The Decisive Role of Guerrillas in the American Civil War (U of North Carolina Press, 2009). online Archived 24 June 2018 at the Wayback Machine
Weber, Olivier, Afghan Eternity, 2002
External links:
abcNEWS: The Secret War on YouTube – Pakistani militants conduct raids in Iran
abcNEWS Exclusive: The Secret War – Deadly guerrilla raids in Iran
Insurgency Research Group – Multi-expert blog dedicated to the study of insurgency and the development of counter-insurgency policy.
Guerrilla warfare on Spartacus Educational
Encyclopædia Britannica, Guerrilla warfare
Relearning Counterinsurgency Warfare
Casebook on Insurgency and Revolutionary Warfare United States Army Special Operations Command
Counter Insurgency Jungle Warfare School (CIJWS), India
mil_tactics_continued_pretraining.csv | Gun data computer | Variations: M1: This was used by seacoast artillery for major-caliber seacoast guns. It computed continuous firing data for a battery of two guns that were separated by not more than 1,000 feet (300 m). It utilised the same type of input data furnished by a range section with the then-current (1940) types of position-finding and fire-control equipment.
M3: This was used in conjunction with the M9 and M10 directors to compute all required firing data, i.e. azimuth, elevation and fuze time. The computations were made continuously, so that the gun was at all times correctly pointed and the fuze correctly timed for firing at any instant. The computer was mounted in the M13 or M14 director trailer.
M4: This was identical to the M3 except for some mechanisms and parts which were altered to allow for different ammunition being used.
M8: This was an electronic computer (using vacuum tube technology) built by Bell Labs and used by coast artillery with medium-caliber guns (up to 8 inches or 200 millimetres). It made the following corrections: wind, drift, Earth's rotation, muzzle velocity, air density, height of site and spot corrections.
M9: This was identical to the M8 except for some mechanisms and parts which were altered to accommodate anti-aircraft ammunition and guns.
M10: A ballistics computer, part of the M38 fire control system, for Skysweeper anti-aircraft guns.
M13: A ballistics computer for M48 tanks.
M14: A ballistics computer for M103 heavy tanks.
M15: A part of the M35 field artillery fire-control system, which included the M1 gunnery officer console and M27 power supply.
M16: A ballistics computer for M60A1 tanks.
M18: FADAC (field artillery digital automatic computer), an all-transistorized general-purpose digital computer manufactured by Amelco (Teledyne Systems, Inc.) and by Autonetics, a division of North American Aviation. FADAC was first fielded during 1960, and was the first semiconductor-based digital electronics field-artillery computer.
M19: A ballistics computer for M60A2 tanks.
M21: A ballistics computer for M60A3 tanks.
M23: A mortar ballistics computer.
M26: A fire-control computer for AH-1 Cobra helicopters, (AH-1F).
M31: A mortar ballistics computer.
M32: A mortar ballistics computer, (handheld).
M1: A ballistics computer for M1 Abrams main battle tanks.
Systems: The Battery Computer System (BCS) AN/GYK-29 was a computer used by the United States Army for computing artillery fire mission data. It replaced the FADAC and was small enough to fit into the HMMWV combat vehicle.
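All of these machines, from the early M-series computers to FADAC and the BCS, perform the same basic job: convert target range and bearing into firing data (azimuth, elevation and, where needed, fuze time), then adjust that solution for non-standard conditions such as the wind, drift and air density corrections listed for the M8 above. The sketch below is a deliberately simplified illustration of that idea, using a drag-free trajectory and purely additive corrections; it is not how any of the actual systems computed their solutions.

```python
# A toy firing-solution sketch. Real gun data computers used firing tables and
# far more detailed ballistics; this drag-free model only illustrates the idea
# of a base solution plus additive corrections.
import math

G = 9.81  # m/s^2

def vacuum_elevation_rad(range_m: float, muzzle_velocity_ms: float) -> float:
    """Lower quadrant-elevation solution for a level, drag-free trajectory."""
    x = G * range_m / muzzle_velocity_ms ** 2
    if x > 1.0:
        raise ValueError("target beyond maximum range of the vacuum model")
    return 0.5 * math.asin(x)

def corrected_azimuth_mils(base_mils: float, drift_mils: float,
                           wind_mils: float, spot_mils: float) -> float:
    """Stand-in for separate wind, drift and spot corrections applied to a base azimuth."""
    return base_mils + drift_mils + wind_mils + spot_mils

elevation_deg = math.degrees(vacuum_elevation_rad(range_m=15000, muzzle_velocity_ms=820))
print(f"drag-free elevation for a 15 km shot: {elevation_deg:.2f} degrees")
print(f"corrected azimuth: {corrected_azimuth_mils(3200.0, 4.5, -2.0, 1.0):.1f} mils")
```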
The AN/GSG-10 TACFIRE (Tactical Fire) direction system automated field artillery command and control functions. It was composed of computers and remote devices such as the Variable Format Message Entry Device (VFMED), the AN/PSG-2 Digital Message Device (DMD) and the AN/TPQ-36 Firefinder field artillery target acquisition radar system linked by digital communications using existing radio and wire communications equipment. Later it also linked with the BCS which had more advanced targeting algorithms.
The last TACFIRE fielding was completed during 1987. Replacement of TACFIRE equipment began during 1994.
TACFIRE used the AN/GYK-12, a second-generation mainframe computer developed primarily by Litton Industries for Army divisional field artillery (DIVARTY) units. It had two configurations (division and battalion level) housed in mobile command shelters. Field artillery brigades also use the division configuration.
Components of the system were identified using acronyms:
CPU – Central Processing Unit
IOU – Input/Output Unit
MCMU – Mass Core Memory Unit
DDT – Digital Data Terminal
MTU – Magnetic Tape Unit
PCG – Power Converter Group
ELP – Electronic Line Printer
DPM – Digital Plotter Map
ACC – Artillery Control Console
RCMU – Remote Control Monitoring Unit
The successor to the TACFIRE system is the Advanced Field Artillery Tactical Data System (AFATDS). The AFATDS is the "Fires XXI" computer system for both tactical and technical fire control. It replaced both BCS (for technical fire solutions) and IFSAS/L-TACFIRE (for tactical fire control) systems in U.S. Field Artillery organizations, as well as in maneuver fire support elements at the battalion level and higher. As of 2009, the U.S. Army was transitioning from a version based on a Sun Microsystems SPARC computer running the Linux kernel to a version based on laptop computers running the Microsoft Windows operating system.
Surviving examples: One reason for a lack of surviving examples of early units was the use of radium on the dials. As a result, they were classified as hazardous waste and were disposed of by the United States Department of Energy. Currently there is one surviving example of FADAC at the Fort Sill artillery museum.
See also: Director (military)
Fire-control system
Kerrison Predictor
List of military electronics of the United States
Mark I Fire Control Computer - US Navy system for 5-inch guns
Numerical control
Project Manager Battle Command
Rangekeeper
References: TM 9-2300 Standard Artillery and Fire Control Materiel dated 1944
TM 9-2300 Artillery Materiel and Associated Equipment. dated May 1949
ST 9-159 Handbook of Ordnance materiel dated 1968
Gun Data Computers, Coast Artillery Journal March–April 1946, pp. 45–47
External links: http://www.globalsecurity.org/military/library/report/1988/MJR.htm
http://ed-thelen.org/comp-hist/BRL61.html#TOC
modern system Archived 2011-06-17 at the Wayback Machine
https://web.archive.org/web/20110617062042/http://sill-www.army.mil/famag/1960/sep_1960/SEP_1960_PAGES_8_15.pdf
https://web.archive.org/web/20040511174351/http://combatindex.com/mil_docs/pdf/hdbk/0700/MIL-HDBK-799.pdf
https://web.archive.org/web/20110720002347/https://rdl.train.army.mil/soldierPortal/atia/adlsc/view/public/12288-1/FM/3-22.91/chap1.htm
https://web.archive.org/web/20110617062233/http://sill-www.army.mil/famag/1958/FEB_1958/FEB_1958_PAGES_32_35.pdf
Bell labs patent
http://web.mit.edu/STS.035/www/PDFs/Newell.pdf
tacfire Archived at [1]
BCS components |
mil_tactics_continued_pretraining.csv | Gunboat diplomacy | Etymology: The term "gunboat diplomacy" comes from the nineteenth-century period of imperialism, when Western powers – from Europe and the United States – would intimidate other, less powerful entities into granting concessions through a demonstration of Western superior military capabilities, usually represented by their naval assets. A coastal country negotiating with a Western power would notice that a warship or fleet of ships had appeared off its coast. The mere sight of such power almost always had a considerable effect, and it was rarely necessary for such boats to use other measures, such as demonstrations of firepower.
A notable example of gunboat diplomacy, the Don Pacifico affair in 1850, saw the British Foreign Secretary Lord Palmerston dispatch a squadron of the Royal Navy to blockade the Greek port of Piraeus in retaliation for the assault of a British subject, David Pacifico, in Athens, and the subsequent failure of the government of King Otto to compensate the Gibraltar-born (and therefore British) Pacifico.
The effectiveness of such simple demonstrations of a nation's force-projection capabilities meant that nations with naval power and command of the sea could establish military bases (for example, Diego Garcia, 1940s onwards) and arrange economically advantageous relationships around the world. Aside from military conquest, gunboat diplomacy was the dominant way to establish new trade relationships, colonial outposts, and expansion of empire.
Peoples lacking the resources or technological innovations available to Western empires found that their own peaceable relationships were readily dismantled in the face of such pressures, and some therefore came to depend on the imperialist nations for access to raw materials or overseas markets.
Theory: Diplomat and naval thinker James Cable spelled out the nature of gunboat diplomacy in a series of works published between 1971 and 1993. In these, he defined the phenomenon as "the use or threat of limited naval force, otherwise than as an act of war, in order to secure advantage or to avert loss, either in the furtherance of an international dispute or else against foreign nationals within the territory or the jurisdiction of their own state." He further broke down the concept into four key areas:
Definitive Force: the use of gunboat diplomacy to create or remove a fait accompli.
Purposeful Force: application of naval force to change the policy or character of the target government or group.
Catalytic Force: a mechanism designed to buy a breathing space or present policy makers with an increased range of options.
Expressive Force: use of navies to send a political message. This aspect of gunboat diplomacy is undervalued and almost dismissed by Cable.
The term "gunboat" may imply naval power-projection - land-based equivalents may include military mobilisation (as in Europe in the northern-hemisphere summer of 1914), the massing of threatening bodies of troops near international borders (as practised by the German Reich in central Europe in the 1940s), or appropriately timed and situated military manoeuvres ("exercises").
Distinctions: Gunboat diplomacy contrasts with views held prior to the 18th century and influenced by Hugo Grotius, who in De jure belli ac pacis (1625) circumscribed the right to resort to force with what he described as "temperamenta".
Gunboat diplomacy is distinct from "defence diplomacy", which is understood to be the peaceful application of resources from across the spectrum of defence to achieve positive outcomes in the development of bilateral and multilateral relationships. "Military diplomacy" is a sub-set of this, tending to refer only to the role of military attachés and their associated activity. Defence diplomacy does not include military operations, but subsumes such other defence activity as international personnel exchanges, ship and aircraft visits, high-level engagement (e.g., ministers and senior defence personnel), training and exercises, security-sector reform, and bilateral military talks.
Modern contexts: Gunboat diplomacy is considered a form of hegemony. As the United States became a military power in the first decade of the 20th century, the Rooseveltian version of gunboat diplomacy, Big Stick Diplomacy, was partially superseded by dollar diplomacy: replacing the big stick with the "juicy carrot" of American private investment. However, during Woodrow Wilson's presidency, conventional gunboat diplomacy did occur, most notably in the case of the U.S. Army's occupation of Veracruz in 1914, during the Mexican Revolution.
Gunboat diplomacy in the post-Cold War world is still largely based on naval forces, owing to the U.S. Navy's overwhelming sea power. U.S. administrations have frequently changed the disposition of their major naval fleets to influence opinion in foreign capitals. More urgent diplomatic points were made by the Clinton administration in the Yugoslav wars of the 1990s (in alliance with the Blair administration) and elsewhere, using sea-launched Tomahawk missiles and E-3 AWACS airborne surveillance aircraft in a more passive display of military presence. Henry Kissinger, during his tenure as United States Secretary of State, summed up the concept thus: "An aircraft carrier is 100,000 tons of diplomacy."
Notable examples:
18th century: Anson's visit to Canton in 1741
19th century: Second Barbary War (1815)
Haiti indemnity controversy (1825)
Pastry War (1838–39)
Opium Wars (1840, 1856)
Paulet Affair (1843)
Don Pacifico Incident (1850)
Second Anglo-Burmese War (1852)
Opening of Japan by United States Navy Commodore Matthew C. Perry and his Black Ships (1853–54)
Paraguay expedition (1858–9)
Shimonoseki Campaign (1863–1864)
Christie Affair (1861–1865)
Shinmiyangyo in Korea (1871)
Ganghwa Island incident (1875)
Tonkin Flotilla (1883)
German East Africa (1885)
Samoan crisis (1887-1889)
Môle Saint-Nicolas affair (1889–1891)
1890 British Ultimatum
Baltimore crisis (1891)
Overthrow of the Kingdom of Hawaii (1893)
Franco-Siamese crisis of 1893
Anglo-Zanzibar War (1896)
Luders Affair (1897)
Yangtze River Patrol (1850s–1930s)
20th century: Venezuelan crisis of 1902–1903
Panama separation from Colombia
Dogger Bank Incident (1904)
Great White Fleet (1907)
Agadir Crisis (1911)
Occupation of Veracruz (1914)
Danzig crisis (1932)
First Taiwan Strait Crisis (1954–55)
Second Taiwan Strait Crisis (1958)
Operation Vantage (1961)
Operation Brother Sam (1964)
Liberation of East Pakistan (1971)
Lebanese Civil War (1983-1984)
Third Taiwan Strait Crisis (1995–96)
21st century: Spratly Islands dispute
See also: Compellence
Fleet in being
Deterrence theory
Peace through strength
Intervention (international law)
Interventionism (politics)
Police action
References:
Further reading: Arnold, Bruce Makoto (2005). Diplomacy Far Removed: A Reinterpretation of the U.S. Decision to Open Diplomatic Relations with Japan (Thesis). University of Arizona.
Cable, James: Gunboat diplomacy. Political Applications of Limited Naval Forces, London 1971 (re-edited 1981 and 1994)
Graham-Yooll, Andrew. Imperial skirmishes: war and gunboat diplomacy in Latin America (2002).
Healy, D. Gunboat Diplomacy in the Wilson Era. The U.S. Navy in Haiti 1915–1916, Madison WIS 1976.
Hagan, K. J. American Gunboat Diplomacy and the Old Navy 1877–1889, Westport/London 1973.
Preston, A. and J. Major. Send a Gunboat! A study of the Gunboat and its role in British policy, 1854–1904, London 1967.
Articles
Long, D. F.: "Martial Thunder": The First Official American Armed Intervention in Asia, in: Pacific Historical Review, Vol. 42, 1973, pp. 143–162.
Willock, R.: Gunboat Diplomacy: Operations of the (British) North America and West Indies Squadron, 1875–1915, Part 2, in: American Neptune, Vol. XXVIII, 1968, pp. 85–112.
Bauer, K. J.: The "Sancala" Affair: Captain Voorhees Seizes an Argentine Squadron, in: American Neptune, Vol. XXIV, 1969, pp. 174–186
In German: Krüger, Henning: Zwischen Küstenverteidigung und Weltpolitik. Die politische Geschichte der preußischen Marine 1848 bis 1867 (Between coastal defence and world policy. The political history of the prussian navy 1848 to 1867), Bochum 2008.
Wiechmann, Gerhard: Die preußisch-deutsche Marine in Lateinamerika 1866–1914. Eine Studie deutscher Kanonenbootpolitik (The Prussian-German Navy in Latin America 1866–1914. A study of German Gunboat diplomacy), Bremen 2002. |
mil_tactics_continued_pretraining.csv | Gunboat diplomacy |
Wiechmann, Gerhard: Die Königlich Preußische Marine in Lateinamerika 1851 bis 1867. Ein Versuch deutscher Kanonenbootpolitik (The royal Prussian navy in Latin America 1851 to 1867. An attempt of German gunboat diplomacy), in: Sandra Carreras/Günther Maihold (ed.): Preußen und Lateinamerika. Im Spannungsfeld von Kommerz, Macht und Kultur, p. 105–144, Münster 2004.
Eberspächer, Cord: Die deutsche Yangtse-Patrouille. Deutsche Kanonenbootpolitik in China im Zeitalter des Imperialismus (The German Yangtse patrol. German Gunboat diplomacy in China in the age of imperialism), Bochum 2004.
N.N.: Die Vernichtung des haitianischen Rebellenkreuzers "Crete à Pierrot" durch S.M.Kbt. "Panther" (The destruction of the Haitian rebel cruiser "Crete à Pierrot" by His Majesty's gunboat "Panther"), in: Marine-Rundschau, 13. Jahrgang, 1902, pp. 1189–1197.
Rheder: Die militärische Unternehmung S.M.S.S. "Charlotte" und "Stein" gegen Haiti im Dezember 1897 (The military enterprise of His Majesty's school ships "Charlotte" and "Stein" against Haiti in December 1897), in: Marine-Rundschau, 41. Jahrgang, 1937, pp. 761–765. |
mil_tactics_continued_pretraining.csv | Helicopter | Etymology: The English word helicopter is adapted from the French word hélicoptère, coined by Gustave Ponton d'Amécourt in 1861, which originates from the Greek helix (ἕλιξ), genitive helikos (ἕλῐκος), "helix, spiral, whirl, convolution" and pteron (πτερόν) "wing". In a process of rebracketing, the word is often (erroneously, from an etymological point of view) perceived by English speakers as consisting of heli- and -copter, leading to words like helipad and quadcopter. English language nicknames for "helicopter" include "chopper", "copter", "heli", and "whirlybird". In the United States military, the common slang is "helo" pronounced with a long "e".
Design characteristics: A helicopter is a type of rotorcraft in which lift and thrust are supplied by one or more horizontally-spinning rotors. By contrast the autogyro (or gyroplane) and gyrodyne have a free-spinning rotor for all or part of the flight envelope, relying on a separate thrust system to propel the craft forwards, so that the airflow sets the rotor spinning to provide lift. The compound helicopter also has a separate thrust system, but continues to supply power to the rotor throughout normal flight.
Rotor system: The rotor system, or more simply rotor, is the rotating part of a helicopter that generates lift. A rotor system may be mounted horizontally, as main rotors are, providing lift vertically, or it may be mounted vertically, such as a tail rotor, to provide horizontal thrust to counteract torque from the main rotors. The rotor consists of a mast, hub and rotor blades.
The mast is a cylindrical metal shaft that extends upwards from the transmission. At the top of the mast is the attachment point for the rotor blades called the hub. Main rotor systems are classified according to how the rotor blades are attached and move relative to the hub. There are three basic types: hingeless, fully articulated, and teetering; although some modern rotor systems use a combination of these.
Anti-torque: Most helicopters have a single main rotor, but torque created by its aerodynamic drag must be countered by an opposed torque. The design that Igor Sikorsky settled on for his VS-300 was a smaller tail rotor. The tail rotor pushes or pulls against the tail to counter the torque effect, and this has become the most common configuration for helicopter design, usually at the end of a tail boom.
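The size of the tail rotor follows from a simple moment balance: its thrust, acting at the end of the tail boom, must offset the torque the engine applies to the main rotor. A minimal sketch of that estimate, using made-up numbers rather than data for any real helicopter, is shown below.

```python
# Rough torque balance for a conventional single-main-rotor helicopter.
# All numbers are illustrative assumptions, not data for a real aircraft.
import math

main_rotor_power_w = 500e3      # assumed power delivered to the main rotor (500 kW)
main_rotor_rpm = 390            # assumed main rotor speed
tail_arm_m = 6.0                # assumed distance from main rotor shaft to tail rotor

omega = main_rotor_rpm * 2 * math.pi / 60          # rotor speed in rad/s
main_rotor_torque = main_rotor_power_w / omega     # Q = P / omega (N*m)
required_tail_thrust = main_rotor_torque / tail_arm_m

print(f"main rotor torque: {main_rotor_torque / 1000:.1f} kN*m")
print(f"tail rotor thrust needed to balance it: {required_tail_thrust / 1000:.2f} kN")
```

The same balance explains why counter-rotating layouts can reinvest that power in lift: when the torques of two main rotors cancel each other, no thrust has to be spent sideways.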
Some helicopters use other anti-torque controls instead of the tail rotor, such as the ducted fan (called Fenestron or FANTAIL) and NOTAR. NOTAR provides anti-torque similar to the way a wing develops lift through the use of the Coandă effect on the tail boom.
The use of two or more horizontal rotors turning in opposite directions is another configuration used to counteract the effects of torque on the aircraft without relying on an anti-torque tail rotor. This allows the power normally diverted to the tail rotor to be applied fully to the main rotors, increasing the aircraft's power efficiency and lifting capacity. There are several common configurations that use the counter-rotating effect to benefit the rotorcraft:
Tandem rotors are two counter-rotating rotors with one mounted behind the other.
Transverse rotors are pair of counter-rotating rotors transversely mounted at the ends of fixed wings or outrigger structures. Now used on tiltrotors, some early model helicopters had used them.
Coaxial rotors are two counter-rotating rotors mounted one above the other with the same axis.
Intermeshing rotors are two counter-rotating rotors mounted close to each other at a sufficient angle to let the rotors intermesh over the top of the aircraft without colliding. An aircraft utilizing this is known as a synchropter.
Multirotors make use of three or more rotors. Specific terms are also used depending on the exact amount of rotors, such as tricopter, quadcopter, hexacopter and octocopter for three rotors, four rotors, six rotors and eight rotors respectively, of which quadcopter is the most common. Multirotors are primarily used on drones and use on aircraft with a human pilot is rare.
Tip jet designs let the rotor push itself through the air and avoid generating torque.
Engines: The number, size and type of engine(s) used on a helicopter determines the size, function and capability of that helicopter design. The earliest helicopter engines were simple mechanical devices, such as rubber bands or spindles, which limited helicopters to the size of toys and small models. For half a century before the first airplane flight, steam engines were used to advance the understanding of helicopter aerodynamics, but their limited power did not allow for manned flight. The introduction of the internal combustion engine at the end of the 19th century became the watershed for helicopter development, as engines powerful enough to lift humans began to be developed and produced.
Early helicopter designs utilized custom-built engines or rotary engines designed for airplanes, but these were soon replaced by more powerful automobile engines and radial engines. The single most limiting factor of helicopter development during the first half of the 20th century was that the amount of power produced by an engine was not able to overcome the engine's weight in vertical flight. This was overcome in early successful helicopters by using the smallest engines available. When the compact, flat engine was developed, the helicopter industry found a lighter-weight powerplant easily adapted to small helicopters, although radial engines continued to be used for larger helicopters.
Turbine engines revolutionized the aviation industry; and the turboshaft engine for helicopter use, pioneered in December 1951 by the aforementioned Kaman K-225, finally gave helicopters an engine with a large amount of power and a low weight penalty. Turboshafts are also more reliable than piston engines, especially when producing the sustained high levels of power required by a helicopter. The turboshaft engine was able to be scaled to the size of the helicopter being designed, so that all but the lightest of helicopter models are powered by turbine engines today.
Special jet engines developed to drive the rotor from the rotor tips are referred to as tip jets. Tip jets powered by a remote compressor are referred to as cold tip jets, while those powered by combustion exhaust are referred to as hot tip jets. An example of a cold jet helicopter is the Sud-Ouest Djinn, and an example of the hot tip jet helicopter is the YH-32 Hornet.
Some radio-controlled helicopters and smaller, helicopter-type unmanned aerial vehicles, use electric motors or motorcycle engines. Radio-controlled helicopters may also have piston engines that use fuels other than gasoline, such as nitromethane. Some turbine engines commonly used in helicopters can also use biodiesel instead of jet fuel.
There are also human-powered helicopters.
Flight controls: A helicopter has four flight control inputs. These are the cyclic, the collective, the anti-torque pedals, and the throttle. The cyclic control is usually located between the pilot's legs and is commonly called the cyclic stick or just cyclic. On most helicopters, the cyclic is similar to a joystick. However, the Robinson R22 and Robinson R44 have a unique teetering bar cyclic control system and a few helicopters have a cyclic control that descends into the cockpit from overhead.
The control is called the cyclic because it changes cyclic pitch of the main blades. The result is to tilt the rotor disk in a particular direction, resulting in the helicopter moving in that direction. If the pilot pushes the cyclic forward, the rotor disk tilts forward, and the rotor produces a thrust in the forward direction. If the pilot pushes the cyclic to the side, the rotor disk tilts to that side and produces thrust in that direction, causing the helicopter to hover sideways.
The collective pitch control or collective is located on the left side of the pilot's seat with a settable friction control to prevent inadvertent movement. The collective changes the pitch angle of all the main rotor blades collectively (i.e. all at the same time) and independently of their position. Therefore, if a collective input is made, all the blades change equally, and the result is the helicopter increasing or decreasing in altitude.
A swashplate controls the collective and cyclic pitch of the main blades. The swashplate moves up and down, along the main shaft, to change the pitch of both blades. This causes the helicopter to push air downward or upward, depending on the angle of attack. The swashplate can also change its angle to move the blades angle forwards or backwards, or left and right, to make the helicopter move in those directions.
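The combined effect of collective and cyclic through the swashplate is often written as a first-harmonic variation of blade pitch with azimuth, roughly theta(psi) = theta0 + theta1c*cos(psi) + theta1s*sin(psi), where theta0 is the collective setting and the two cyclic terms describe the swashplate tilt. The snippet below is a minimal illustration of that relationship; the angles and sign conventions are assumptions and differ between rotor designs.

```python
# First-harmonic blade-pitch model: collective sets the mean pitch, cyclic
# adds a once-per-revolution variation as the blade sweeps around the disc.
# Angles and sign conventions here are illustrative assumptions.
import math

def blade_pitch_deg(azimuth_deg: float, collective_deg: float,
                    lateral_cyclic_deg: float, longitudinal_cyclic_deg: float) -> float:
    psi = math.radians(azimuth_deg)
    return (collective_deg
            + lateral_cyclic_deg * math.cos(psi)
            + longitudinal_cyclic_deg * math.sin(psi))

# Pure collective: every blade sees the same pitch at every azimuth.
print([round(blade_pitch_deg(a, 8.0, 0.0, 0.0), 1) for a in (0, 90, 180, 270)])
# Adding a cyclic input: pitch now varies around the disc, tilting it.
print([round(blade_pitch_deg(a, 8.0, 0.0, -2.0), 1) for a in (0, 90, 180, 270)])
```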
The anti-torque pedals are located in the same position as the rudder pedals in a fixed-wing aircraft, and serve a similar purpose, namely to control the direction in which the nose of the aircraft is pointed. Application of the pedal in a given direction changes the pitch of the tail rotor blades, increasing or reducing the thrust produced by the tail rotor and causing the nose to yaw in the direction of the applied pedal. The pedals mechanically change the pitch of the tail rotor altering the amount of thrust produced.
Helicopter rotors are designed to operate in a narrow range of RPM. The throttle controls the power produced by the engine, which is connected to the rotor by a fixed ratio transmission. |
mil_tactics_continued_pretraining.csv | Helicopter | The purpose of the throttle is to maintain enough engine power to keep the rotor RPM within allowable limits so that the rotor produces enough lift for flight. In single-engine helicopters, the throttle control is a motorcycle-style twist grip mounted on the collective control, while dual-engine helicopters have a power lever for each engine.
Compound helicopter: A compound helicopter has an additional system for thrust and, typically, small stub fixed wings. This offloads the rotor in cruise, which allows its rotation to be slowed down, thus increasing the maximum speed of the aircraft. The Lockheed AH-56A Cheyenne diverted up to 90% of its engine power to a pusher propeller during forward flight.
Flight: There are three basic flight conditions for a helicopter: hover, forward flight and the transition between the two.
Hover: Hovering is the most challenging part of flying a helicopter. This is because a helicopter generates its own gusty air while in a hover, which acts against the fuselage and flight control surfaces. The result is constant control inputs and corrections by the pilot to keep the helicopter where it is required to be. Despite the complexity of the task, the control inputs in a hover are simple. The cyclic is used to eliminate drift in the horizontal plane, that is to control forward and back, right and left. The collective is used to maintain altitude. The pedals are used to control nose direction or heading. It is the interaction of these controls that makes hovering so difficult, since an adjustment in any one control requires an adjustment of the other two, creating a cycle of constant correction.
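The constant cycle of cross-coupled corrections described above can be caricatured as a very small feedback loop: drift is corrected with cyclic, altitude error with collective, and heading error with the pedals, after which each output disturbs the others and the cycle repeats. The sketch below is a bare-bones illustration with invented states and gains; real piloting and stability-augmentation systems are far more involved.

```python
# A bare-bones hover-hold correction step: proportional responses on drift
# (cyclic), altitude error (collective) and heading error (pedals).
# States and gains are invented for illustration only.
def hover_corrections(state: dict, gains: dict) -> dict:
    return {
        "cyclic_fore_aft": -gains["cyclic"] * state["drift_forward_m"],
        "cyclic_lateral":  -gains["cyclic"] * state["drift_right_m"],
        "collective":      -gains["collective"] * state["altitude_error_m"],
        "pedal":           -gains["pedal"] * state["heading_error_deg"],
    }

state = {"drift_forward_m": 1.2, "drift_right_m": -0.4,
         "altitude_error_m": 0.8, "heading_error_deg": 5.0}
gains = {"cyclic": 0.05, "collective": 0.10, "pedal": 0.02}
print(hover_corrections(state, gains))
```

In practice each correction feeds back into the others: raising the collective, for example, increases main-rotor torque and so demands a pedal input, which is exactly the coupling the paragraph above describes.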
Transition from hover to forward flight: As a helicopter moves from hover to forward flight it enters a state called translational lift which provides extra lift without increasing power. This state, most typically, occurs when the airspeed reaches approximately 16–24 knots (30–44 km/h; 18–28 mph), and may be necessary for a helicopter to obtain flight.
Forward flight: In forward flight a helicopter's flight controls behave more like those of a fixed-wing aircraft. Applying forward pressure on the cyclic will cause the nose to pitch down, with a resultant increase in airspeed and loss of altitude. Aft cyclic will cause the nose to pitch up, slowing the helicopter and causing it to climb. Increasing collective (power) while maintaining a constant airspeed will induce a climb while decreasing collective will cause a descent. Coordinating these two inputs, down collective plus aft cyclic or up collective plus forward cyclic, will result in airspeed changes while maintaining a constant altitude. The pedals serve the same function in both a helicopter and a fixed-wing aircraft, to maintain balanced flight. This is done by applying a pedal input in whichever direction is necessary to center the ball in the turn and bank indicator.
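The coordination rules in the paragraph above (up collective plus forward cyclic to gain speed, down collective plus aft cyclic to lose it, at constant altitude) can be captured in a tiny lookup. The function and its names are our own illustrative sketch, not flight guidance.

# Paired cyclic/collective inputs for airspeed changes at constant altitude,
# following the description above (illustrative sketch only).
def coordinated_inputs(goal):
    """goal: 'speed_up' or 'slow_down' while holding altitude."""
    if goal == "speed_up":
        return {"cyclic": "forward", "collective": "up"}   # nose down, add power
    if goal == "slow_down":
        return {"cyclic": "aft", "collective": "down"}     # nose up, reduce power
    raise ValueError("goal must be 'speed_up' or 'slow_down'")

print(coordinated_inputs("speed_up"))
print(coordinated_inputs("slow_down"))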
Uses: Due to the operating characteristics of the helicopter—its ability to take off and land vertically, and to hover for extended periods of time, as well as the aircraft's handling properties under low airspeed conditions—it has proved advantageous to conduct tasks that were previously not possible with other aircraft, or were time- or work-intensive to accomplish on the ground. Today, helicopter uses include transportation of people and cargo, military uses, construction, firefighting, search and rescue, tourism, medical transport, law enforcement, agriculture, news and media, and aerial observation, among others.
A helicopter used to carry loads connected to long cables or slings is called an aerial crane. Aerial cranes are used to place heavy equipment, like radio transmission towers and large air conditioning units, on the tops of tall buildings, or when an item must be raised up in a remote area, such as a radio tower raised on the top of a hill or mountain. Helicopters are used as aerial cranes in the logging industry to lift trees out of terrain where vehicles cannot travel and where environmental concerns prohibit the building of roads. These operations are referred to as longline because of the long, single sling line used to carry the load. In military service helicopters are often useful for delivery of outsized slung loads that would not fit inside ordinary cargo aircraft: artillery pieces, large machinery (field radars, communications gear, electrical generators), or pallets of bulk cargo. In military operations these payloads are often delivered to remote locations made inaccessible by mountainous or riverine terrain, or naval vessels at sea.
In electronic news gathering, helicopters have provided aerial views of some major news stories, and have been doing so since the late 1960s. Helicopters have also been used in films, both in front of and behind the camera.
The largest single non-combat helicopter operation in history was the disaster management operation following the 1986 Chernobyl nuclear disaster. Hundreds of pilots were involved in airdrop and observation missions, making dozens of sorties a day for several months.
"Helitack" is the use of helicopters to combat wildland fires. The helicopters are used for aerial firefighting (water bombing) and may be fitted with tanks or carry helibuckets. Helibuckets, such as the Bambi bucket, are usually filled by submerging the bucket into lakes, rivers, reservoirs, or portable tanks. Tanks fitted onto helicopters are filled from a hose while the helicopter is on the ground or water is siphoned from lakes or reservoirs through a hanging snorkel as the helicopter hovers over the water source. Helitack helicopters are also used to deliver firefighters, who rappel down to inaccessible areas, and to resupply firefighters. Common firefighting helicopters include variants of the Bell 205 and the Erickson S-64 Aircrane helitanker.
Helicopters are used as air ambulances for emergency medical assistance in situations when an ambulance cannot easily or quickly reach the scene, or cannot transport the patient to a medical facility in time. Helicopters are also used when patients need to be transported between medical facilities and air transportation is the most practical method. An air ambulance helicopter is equipped to stabilize and provide limited medical treatment to a patient while in flight. The use of helicopters as air ambulances is often referred to as "MEDEVAC", and patients are referred to as being "airlifted", or "medevaced". This use was pioneered in the Korean War, when time to reach a medical facility was reduced to three hours from the eight hours needed in World War II, and further reduced to two hours by the Vietnam War. In naval service a prime function of rescue helicopters is to promptly retrieve downed aircrew involved in crashes occurring upon launch or recovery aboard aircraft carriers. In past years this function was performed by destroyers escorting the carrier, but since then helicopters have proved vastly more effective.
Police departments and other law enforcement agencies use helicopters to pursue suspects and patrol the skies. Since helicopters can achieve a unique aerial view, they are often used in conjunction with police on the ground to report on suspects' locations and movements. They are often mounted with lighting and heat-sensing equipment for night pursuits.
Military forces use attack helicopters to conduct aerial attacks on ground targets. Such helicopters are mounted with missile launchers and miniguns. Transport helicopters are used to ferry troops and supplies where the lack of an airstrip would make transport via fixed-wing aircraft impossible. The use of transport helicopters to deliver troops as an attack force on an objective is referred to as "air assault". Unmanned aerial systems (UAS) helicopter systems of varying sizes are developed by companies for military reconnaissance and surveillance duties. Naval forces also use helicopters equipped with dipping sonar for anti-submarine warfare, since they can operate from small ships.
Oil companies charter helicopters to move workers and parts quickly to remote drilling sites located at sea or in remote locations. The speed advantage over boats makes the high operating cost of helicopters cost-effective in ensuring that oil platforms continue to operate. Various companies specialize in this type of operation.
NASA developed Ingenuity, a 1.8 kg (4.0 lb) helicopter used to survey Mars (along with a rover). It began service in February 2021 and was retired due to sustained rotor blade damage in January 2024 after 73 sorties. As the Martian atmosphere is 100 times thinner than Earth's, its two blades spin at close to 3,000 revolutions a minute, approximately 10 times faster than that of a terrestrial helicopter.
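The rotor-speed comparison for Ingenuity is simple arithmetic; the short sketch below only restates the figures quoted above, and the implied ~300 RPM terrestrial main-rotor speed is a derived ballpark rather than a quoted number.

# Rough arithmetic behind the Ingenuity rotor-speed comparison (illustrative).
ingenuity_rpm = 3000            # "close to 3,000 revolutions a minute"
speed_ratio = 10                # "approximately 10 times faster"
atmosphere_ratio = 100          # Martian atmosphere ~100 times thinner than Earth's

implied_terrestrial_rpm = ingenuity_rpm / speed_ratio
print(f"Implied terrestrial main-rotor speed: ~{implied_terrestrial_rpm:.0f} RPM")
print(f"Atmospheric density ratio, Earth:Mars ~ {atmosphere_ratio}:1")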
Market: In 2017, 926 civil helicopters were shipped for $3.68 billion, led by Airbus Helicopters with $1.87 billion for 369 rotorcraft, Leonardo Helicopters with $806 million for 102 (first three-quarters only), Bell Helicopter with $696 million for 132, then Robinson Helicopter with $161 million for 305.
By October 2018, the in-service and stored helicopter fleet of 38,570 with civil or government operators was led by Robinson Helicopter with 24.7%, followed by Airbus Helicopters with 24.4%, then Bell with 20.5%, Leonardo with 8.4%, Russian Helicopters with 7.7%, Sikorsky Aircraft with 7.2%, MD Helicopters with 3.4% and others with 2.2%.
The most widespread model is the piston-engined Robinson R44 with 5,600 units, followed by the H125/AS350 with 3,600 and the Bell 206 with 3,400.
Most were in North America with 34.3%, then Europe with 28.0%, followed by Asia-Pacific with 18.6%, Latin America with 11.6%, Africa with 5.3% and the Middle East with 1.7%.
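The fleet-share figures above reduce to a few lines of arithmetic; the sketch below converts the quoted percentages of the 38,570-aircraft fleet into approximate unit counts (the rounding is ours, and the quoted shares total roughly 98.5%).

# Converting the October 2018 fleet shares into approximate unit counts (illustrative).
fleet_total = 38570
shares = {
    "Robinson": 24.7, "Airbus": 24.4, "Bell": 20.5, "Leonardo": 8.4,
    "Russian Helicopters": 7.7, "Sikorsky": 7.2, "MD Helicopters": 3.4, "Other": 2.2,
}
units = {maker: round(fleet_total * pct / 100) for maker, pct in shares.items()}
print(units)
print("Quoted shares total:", round(sum(shares.values()), 1), "%")   # ~98.5 as listed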
History:
Early design: The earliest references for vertical flight came from China. Since around 400 BC, Chinese children have played with bamboo flying toys (or Chinese top). This bamboo-copter is spun by rolling a stick attached to a rotor. The spinning creates lift, and the toy flies when released. The 4th-century AD Daoist book Baopuzi by Ge Hong (抱朴子 "Master who Embraces Simplicity") reportedly describes some of the ideas inherent to rotary wing aircraft.
Designs similar to the Chinese helicopter toy appeared in some Renaissance paintings and other works. In the 18th and early 19th centuries Western scientists developed flying machines based on the Chinese toy.
It was not until the early 1480s, when Italian polymath Leonardo da Vinci created a design for a machine that could be described as an "aerial screw", that any recorded advancement was made towards vertical flight. His notes suggested that he built small flying models, but there was no indication of any provision to stop the rotor from making the craft rotate. As scientific knowledge increased and became more accepted, people continued to pursue the idea of vertical flight.
In July 1754, Russian Mikhail Lomonosov developed a small coaxial rotor model, patterned after the Chinese top but powered by a wound-up spring device, and demonstrated it to the Russian Academy of Sciences, suggesting it as a method to lift meteorological instruments. In 1783, Christian de Launoy and his mechanic, Bienvenu, used a coaxial version of the Chinese top in a model consisting of contra-rotating turkey flight feathers as rotor blades, and in 1784 demonstrated it to the French Academy of Sciences. Sir George Cayley, influenced by a childhood fascination with the Chinese flying top, developed a model with feather rotors, similar to that of Launoy and Bienvenu, but powered by rubber bands. By the end of the century, he had progressed to using sheets of tin for rotor blades and springs for power. His writings on his experiments and models would become influential on future aviation pioneers. Alphonse Pénaud would later develop coaxial rotor model helicopter toys in 1870, also powered by rubber bands. One of these toys, given as a gift by their father, would inspire the Wright brothers to pursue the dream of flight.
In 1861, the word "helicopter" was coined by Gustave de Ponton d'Amécourt, a French inventor who demonstrated a small steam-powered model. While celebrated as an innovative use of a new metal, aluminum, the model never lifted off the ground. D'Amecourt's linguistic contribution would survive to eventually describe the vertical flight he had envisioned. Steam power was popular with other inventors as well. In 1877, the Italian engineer, inventor and aeronautical pioneer Enrico Forlanini developed an unmanned helicopter powered by a steam engine. It rose to a height of 13 meters (43 feet), where it remained for 20 seconds, after a vertical take-off from a park in Milan. Milan has dedicated its city airport to Enrico Forlanini, also named Linate Airport, as well as the nearby park, the Parco Forlanini. Emmanuel Dieuaide's steam-powered design featured counter-rotating rotors powered through a hose from a boiler on the ground. In 1887 Parisian inventor, Gustave Trouvé, built and flew a tethered electric model helicopter.
In July 1901, the maiden flight of Hermann Ganswindt's helicopter took place in Berlin-Schöneberg; this was probably the first heavier-than-air motor-driven flight carrying humans. A movie covering the event was taken by Max Skladanowsky, but it remains lost.
In 1885, Thomas Edison was given US$1,000 (equivalent to $34,000 today) by James Gordon Bennett, Jr., to conduct experiments towards developing flight. Edison built a helicopter and used the paper for a stock ticker to create guncotton, with which he attempted to power an internal combustion engine. The helicopter was damaged by explosions and one of his workers was badly burned. Edison reported that it would take a motor with a ratio of three to four pounds per horsepower produced to be successful, based on his experiments. Ján Bahýľ, a Slovak inventor, adapted the internal combustion engine to power his helicopter model that reached a height of 0.5 meters (1.6 feet) in 1901. On 5 May 1905, his helicopter reached 4 meters (13 feet) in altitude and flew for over 1,500 meters (4,900 feet). In 1908, Edison patented his own design for a helicopter powered by a gasoline engine with box kites attached to a mast by cables for a rotor, but it never flew.
First flights: In 1906, two French brothers, Jacques and Louis Breguet, began experimenting with airfoils for helicopters. In 1907, those experiments resulted in the Gyroplane No. 1, possibly the earliest known example of a quadcopter. Although there is some uncertainty about the date, sometime between 14 August and 29 September 1907 the Gyroplane No. 1 lifted its pilot into the air about 0.6 metres (2 ft) for a minute. The Gyroplane No. 1 proved to be extremely unsteady and required a man at each corner of the airframe to hold it steady. For this reason, the flights of the Gyroplane No. 1 are considered to be the first manned flight of a helicopter, but not a free or untethered flight.
That same year, fellow French inventor Paul Cornu designed and built the Cornu helicopter which used two 6.1-metre (20 ft) counter-rotating rotors driven by a 24 hp (18 kW) Antoinette engine. On 13 November 1907, it lifted its inventor to 0.3 metres (1 ft) and remained aloft for 20 seconds. Even though this flight did not surpass the flight of the Gyroplane No. 1, it was reported to be the first truly free flight with a pilot. Cornu's helicopter completed a few more flights and achieved a height of nearly 2.0 metres (6.5 ft), but it proved to be unstable and was abandoned.
In 1909, J. Newton Williams of Derby, Connecticut, and Emile Berliner of Washington, D.C., flew a helicopter "on three occasions" at Berliner's lab in Washington's Brightwood neighborhood.
In 1911, Slovenian philosopher and economist Ivan Slokar patented a helicopter configuration.
The Danish inventor Jacob Ellehammer built the Ellehammer helicopter in 1912. It consisted of a frame equipped with two counter-rotating discs, each of which was fitted with six vanes around its circumference. After indoor tests, the aircraft was demonstrated outdoors and made several free take-offs. Experiments with the helicopter continued until September 1916, when it tipped over during take-off, destroying its rotors.
During World War I, Austria-Hungary developed the PKZ, an experimental helicopter prototype, with two aircraft built.
Early development: In the early 1920s, Argentine Raúl Pateras-Pescara de Castelluccio, while working in Europe, demonstrated one of the first successful applications of cyclic pitch. Coaxial, contra-rotating, biplane rotors could be warped to cyclically increase and decrease the lift they produced. The rotor hub could also be tilted forward a few degrees, allowing the aircraft to move forward without a separate propeller to push or pull it. Pateras-Pescara was also able to demonstrate the principle of autorotation. By January 1924, Pescara's helicopter No. 1 was tested but was found to be underpowered and could not lift its own weight. His 2F fared better and set a record. The British government funded further research by Pescara which resulted in helicopter No. 3, powered by a 250-horsepower (190 kW) radial engine which could fly for up to ten minutes.
In March 1923 Time magazine reported Thomas Edison sent George de Bothezat a congratulations for a successful helicopter test flight. Edison wrote, "So far as I know, you have produced the first successful helicopter." The helicopter was tested at McCook's Field and remained airborne for 2 minutes and 45 seconds at a height of 15 feet.
On 14 April 1924, Frenchman Étienne Oehmichen set the first helicopter world record recognized by the Fédération Aéronautique Internationale (FAI), flying his quadrotor helicopter 360 meters (1,180 ft). On 18 April 1924, Pescara beat Oehmichen's record, flying for a distance of 736 meters (2,415 ft) (nearly 0.80 kilometers or 0.5 miles) in 4 minutes and 11 seconds (about 13 km/h or 8 mph), maintaining a height of 1.8 meters (6 feet). On 4 May, Oehmichen completed the first one-kilometer (0.62 mi) closed-circuit helicopter flight in 7 minutes 40 seconds with his No. 2 machine.
In the US, George de Bothezat built the quadrotor helicopter de Bothezat helicopter for the United States Army Air Service but the Army cancelled the program in 1924, and the aircraft was scrapped.
Albert Gillis von Baumhauer, a Dutch aeronautical engineer, began studying rotorcraft design in 1923. His first prototype "flew" ("hopped" and hovered in reality) on 24 September 1925, with Dutch Army-Air arm Captain Floris Albert van Heijst at the controls. The controls that van Heijst used were von Baumhauer's inventions, the cyclic and collective. Patents were granted to von Baumhauer for his cyclic and collective controls by the British ministry of aviation on 31 January 1927, under patent number 265,272.
In 1927, Engelbert Zaschka of Germany built a two-rotor helicopter in which a gyroscope was used to increase stability and to serve as an energy accumulator for a gliding descent to landing. Zaschka's machine, reported at the time as the first helicopter to work so successfully in miniature, could not only rise and descend vertically but also remain stationary at any height.
In 1928, Hungarian aviation engineer Oszkár Asbóth constructed a helicopter prototype that took off and landed at least 182 times, with a maximum single flight duration of 53 minutes.
In 1930, the Italian engineer Corradino D'Ascanio built his D'AT3, a coaxial helicopter. His relatively large machine had two, two-bladed, counter-rotating rotors. Control was achieved by using auxiliary wings or servo-tabs on the trailing edges of the blades, a concept that was later adopted by other helicopter designers, including Bleeker and Kaman. Three small propellers mounted to the airframe were used for additional pitch, roll, and yaw control. The D'AT3 held modest FAI speed and altitude records for the time, including altitude (18 m or 59 ft), duration (8 minutes 45 seconds) and distance flown (1,078 m or 3,540 ft).
First practical rotorcraft: Spanish aeronautical engineer and pilot Juan de la Cierva invented the autogyro in the early 1920s; it became the first practical rotorcraft. In 1928, de la Cierva successfully flew an autogyro across the English Channel, from London to Paris. In 1934, an autogyro became the first rotorcraft to successfully take off and land on the deck of a ship. That same year, the autogyro was employed by the Spanish military during the Asturias revolt, becoming the first military deployment of a rotorcraft. Autogyros were also employed in New Jersey and Pennsylvania for delivering mail and newspapers prior to the invention of the helicopter. Though the autogyro lacked true vertical flight capability, work on it formed the basis for later helicopter analysis.
Single lift-rotor success: In the Soviet Union, Boris N. Yuriev and Alexei M. Cheremukhin, two aeronautical engineers working at the Tsentralniy Aerogidrodinamicheskiy Institut (TsAGI or the Central Aerohydrodynamic Institute), constructed and flew the TsAGI 1-EA single lift-rotor helicopter, which used an open tubing framework, a four-blade main lift rotor, and twin sets of 1.8-meter (5.9-foot) diameter, two-bladed anti-torque rotors: one set of two at the nose and one set of two at the tail. Powered by two M-2 powerplants, up-rated copies of the Gnome Monosoupape 9 Type B-2 100 CV output rotary engine of World War I, the TsAGI 1-EA made several low altitude flights. By 14 August 1932, Cheremukhin managed to get the 1-EA up to an unofficial altitude of 605 meters (1,985 feet), shattering d'Ascanio's earlier achievement. As the Soviet Union was not yet a member of the FAI, however, Cheremukhin's record remained unrecognized.
Nicolas Florine, a Russian engineer, built the first twin tandem rotor machine to perform a free flight. It flew in Sint-Genesius-Rode, at the Laboratoire Aérotechnique de Belgique (now von Karman Institute) in April 1933, and attained an altitude of six meters (20 feet) and an endurance of eight minutes. Florine chose a co-rotating configuration because the gyroscopic stability of the rotors would not cancel. Therefore, the rotors had to be tilted slightly in opposite directions to counter torque. Using hingeless rotors and co-rotation also minimised the stress on the hull. At the time, it was one of the most stable helicopters in existence.
The Bréguet-Dorand Gyroplane Laboratoire was built in 1933. It was a coaxial helicopter, contra-rotating. After many ground tests and an accident, it first took flight on 26 June 1935. Within a short time, the aircraft was setting records with pilot Maurice Claisse at the controls. On 14 December 1935, he set a record for closed-circuit flight with a 500-meter (1,600-foot) diameter. The next year, on 26 September 1936, Claisse set a height record of 158 meters (518 feet). And, finally, on 24 November 1936, he set a flight duration record of one hour, two minutes and 50 seconds over a 44 kilometers (27 miles) closed circuit at 44.7 kilometres per hour (27.8 mph). The aircraft was destroyed in 1943 by an Allied airstrike at Villacoublay airport.
American single-rotor beginnings: American inventor Arthur M. Young started work on model helicopters in 1928 using converted electric hover motors to drive the rotor head. Young invented the stabilizer bar and patented it shortly after. A mutual friend introduced Young to Lawrence Dale, who once seeing his work asked him to join the Bell Aircraft company. When Young arrived at Bell in 1941, he signed his patent over and began work on the helicopter. His budget was US$250,000 (equivalent to $5.2 million today) to build two working helicopters. In just six months they completed the first Bell Model 1, which spawned the Bell Model 30, later succeeded by the Bell 47.
Birth of an industry: Heinrich Focke at Focke-Wulf had purchased a license from Cierva Autogiro Company, which according to Frank Kingston Smith Sr., included "the fully controllable cyclic/collective pitch hub system". In return, Cierva Autogiro received a cross-license to build the Focke-Achgelis helicopters. Focke designed the world's first practical helicopter, the transverse twin-rotor Focke-Wulf Fw 61, which first flew in June 1936. It was demonstrated by Hanna Reitsch in February 1938 inside the Deutschlandhalle in Berlin. The Fw 61 set a number of FAI records from 1937 to 1939, including: maximum altitude of 3,427 metres (11,243 ft), maximum distance of 230 kilometres (140 mi), and maximum speed of 124 kilometres per hour (77 mph). Autogiro development was now being bypassed by a focus on helicopters.
During World War II, Nazi Germany used helicopters in small numbers for observation, transport, and medical evacuation. The Flettner Fl 282 Kolibri synchropter—using the same basic configuration as Anton Flettner's own pioneering Fl 265—was used in the Baltic, Mediterranean, and Aegean Seas. The Focke-Achgelis Fa 223 Drache, like the Fw 61, used two transverse rotors, and was the largest rotorcraft of the war. Extensive bombing by the Allied forces prevented Germany from producing helicopters in large quantities during the war.
In the United States, Russian-born engineer Igor Sikorsky and Wynn Laurence LePage competed to produce the U.S. military's first helicopter. LePage received the patent rights to develop helicopters patterned after the Fw 61, and built the XR-1 in 1941. Meanwhile, Sikorsky settled on a simpler, single-rotor design, the VS-300 of 1939, which turned out to be the first practical single lifting-rotor helicopter design. After experimenting with configurations to counteract the torque produced by the single main rotor, Sikorsky settled on a single, smaller rotor mounted on the tail boom.
Developed from the VS-300, Sikorsky's R-4 of 1942 was the first large-scale mass-produced helicopter, with a production order for 100 aircraft. The R-4 was the only Allied helicopter to serve in World War II, used primarily for search and rescue (by the USAAF 1st Air Commando Group) in the Burma campaign; in Alaska; and in other areas with harsh terrain. Total production reached 131 helicopters before the R-4 was replaced by other Sikorsky helicopters such as the R-5 and the R-6. In all, Sikorsky produced over 400 helicopters before the end of World War II.
While LePage and Sikorsky built their helicopters for the military, Bell Aircraft hired Arthur Young to help build a helicopter using Young's two-blade teetering rotor design, which used a weighted stabilizer bar placed at a 90° angle to the rotor blades. The subsequent Model 30 helicopter of 1943 showed the design's simplicity and ease of use. The Model 30 was developed into the Bell 47 of 1945, which became the first helicopter certified for civilian use in the United States (March 1946). Produced in several countries, the Bell 47 was the most popular helicopter model for nearly 30 years.
Turbine age: In 1951, at the urging of his contacts at the Department of the Navy, Charles Kaman modified his K-225 synchropter—a design for a twin-rotor helicopter concept first pioneered by Anton Flettner in 1939, with the aforementioned Fl 265 piston-engined design in Germany—with a new kind of engine, the turboshaft engine. This adaptation of the turbine engine provided a large amount of power to Kaman's helicopter with a lower weight penalty than piston engines, with their heavy engine blocks and auxiliary components. On 11 December 1951, the Kaman K-225 became the first turbine-powered helicopter in the world. Two years later, on 26 March 1954, a modified Navy HTK-1, another Kaman helicopter, became the first twin-turbine helicopter to fly. However, it was the Sud Aviation Alouette II that would become the first helicopter to be produced with a turbine-engine.
Reliable helicopters capable of stable hover flight were developed decades after fixed-wing aircraft. This is largely due to higher engine power density requirements than fixed-wing aircraft. Improvements in fuels and engines during the first half of the 20th century were a critical factor in helicopter development. The availability of lightweight turboshaft engines in the second half of the 20th century led to the development of larger, faster, and higher-performance helicopters. While smaller and less expensive helicopters still use piston engines, turboshaft engines are the preferred powerplant for helicopters today.
Safety:
Maximum speed limit: There are several reasons a helicopter cannot fly as fast as a fixed-wing aircraft. When the helicopter is hovering, the outer tips of the rotor travel at a speed determined by the length of the blade and the rotational speed. In a moving helicopter, however, the speed of the blades relative to the air depends on the speed of the helicopter as well as on their rotational speed. The airspeed of the advancing rotor blade is much higher than that of the helicopter itself. It is possible for this blade to exceed the speed of sound, and thus produce vastly increased drag and vibration.
At the same time, the advancing blade creates more lift traveling forward, while the retreating blade produces less lift. If the aircraft were to accelerate to the speed at which the blade tips are moving, the retreating blade would pass through air moving at the same speed as the blade and produce no lift at all, resulting in very high torque stresses on the central shaft that can tip down the retreating-blade side of the vehicle and cause a loss of control. Dual counter-rotating blades prevent this situation due to having two advancing and two retreating blades with balanced forces.
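The asymmetry described in the two paragraphs above reduces to adding or subtracting the forward speed from the rotational tip speed. The rotor radius, rotor RPM and speed of sound in the sketch below are generic illustrative values, not data for any particular helicopter.

# Advancing vs. retreating blade tip airspeed (illustrative assumed values).
import math

ROTOR_RADIUS_M = 7.0         # assumed main rotor radius
ROTOR_RPM = 380.0            # assumed rotor speed
SPEED_OF_SOUND_MS = 340.0    # sea-level approximation

def tip_speeds(forward_speed_ms):
    omega = ROTOR_RPM * 2.0 * math.pi / 60.0       # rotor angular rate, rad/s
    rotational_tip = omega * ROTOR_RADIUS_M        # tip speed in a hover
    advancing = rotational_tip + forward_speed_ms  # tip moving into the airflow
    retreating = rotational_tip - forward_speed_ms # tip moving away from it
    return advancing, retreating

for v in (0.0, 60.0, 90.0):    # forward speed in m/s
    adv, ret = tip_speeds(v)
    print(f"V={v:5.1f} m/s  advancing={adv:6.1f} m/s (Mach {adv / SPEED_OF_SOUND_MS:.2f})  "
          f"retreating={ret:6.1f} m/s")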
Because the advancing blade has higher airspeed than the retreating blade and generates a dissymmetry of lift, rotor blades are designed to "flap" – lift and twist in such a way that the advancing blade flaps up and develops a smaller angle of attack. Conversely, the retreating blade flaps down, develops a higher angle of attack, and generates more lift. At high speeds, the force on the rotors is such that they "flap" excessively, and the retreating blade can reach too high an angle and stall. For this reason, the maximum safe forward airspeed of a helicopter is given a design rating called VNE (velocity never exceed). In addition, it is possible for the helicopter to fly at an airspeed where an excessive amount of the retreating blade stalls, which results in high vibration, pitch-up, and roll into the retreating blade.
Noise: At the end of the 20th century, designers began working on helicopter noise reduction. Urban communities have often expressed great dislike of noisy aviation or noisy aircraft, and police and passenger helicopters can be unpopular because of the sound. The redesigns followed the closure of some city heliports and government action to constrain flight paths in national parks and other places of natural beauty.
Vibration: To reduce vibration, all helicopters have rotor adjustments for height and weight. A maladjusted helicopter can easily vibrate so much that it will shake itself apart. Blade height is adjusted by changing the pitch of the blade. Weight is adjusted by adding or removing weights on the rotor head and/or at the blade end caps. Most also have vibration dampers for height and pitch. Some also use mechanical feedback systems to sense and counter vibration. Usually the feedback system uses a mass as a "stable reference" and a linkage from the mass operates a flap to adjust the rotor's angle of attack to counter the vibration. Adjustment can be difficult in part because measurement of the vibration is hard, usually requiring sophisticated accelerometers mounted throughout the airframe and gearboxes. The most common blade vibration adjustment measurement system is to use a stroboscopic flash lamp, and observe painted markings or coloured reflectors on the underside of the rotor blades. The traditional low-tech system is to mount coloured chalk on the rotor tips, and see how they mark a linen sheet. Health and Usage Monitoring Systems (HUMS) provide vibration monitoring and rotor track and balance solutions to limit vibration. Gearbox vibration most often requires a gearbox overhaul or replacement. Gearbox or drive train vibrations can be extremely harmful to a pilot. The most severe effects are pain, numbness, and loss of tactile discrimination or dexterity.
Loss of tail-rotor effectiveness: For a standard helicopter with a single main rotor, the tips of the main rotor blades produce a vortex ring in the air, which is a spiraling and circularly rotating airflow. As the craft moves forward, these vortices trail off behind the craft.
When hovering with a forward diagonal crosswind, or moving in a forward diagonal direction, the spinning vortices trailing off the main rotor blades will align with the rotation of the tail rotor and cause an instability in flight control.
When the trailing vortices striking the tail rotor rotate in the same direction as it, the tail rotor loses thrust; when they rotate in the opposite direction, thrust is increased. Use of the foot pedals is required to adjust the tail rotor's angle of attack and compensate for these instabilities.
These issues are due to the exposed tail rotor cutting through open air around the rear of the vehicle. This issue disappears when the tail is instead ducted, using an internal impeller enclosed in the tail and a jet of high pressure air sideways out of the tail, as the main rotor vortices can not impact the operation of an internal impeller.
Critical wind azimuth: For a standard helicopter with a single main rotor, maintaining steady flight with a crosswind presents an additional flight control problem, where strong crosswinds from certain angles will increase or decrease lift from the main rotors. This effect is also triggered in a no-wind condition when moving the craft diagonally in various directions, depending on the direction of main rotor rotation.
This can lead to a loss of control and a crash or hard landing when operating at low altitudes, due to the sudden unexpected loss of lift, and insufficient time and distance available to recover.
Transmission: Conventional rotary-wing aircraft use a set of complex mechanical gearboxes to convert the high rotation speed of gas turbines into the low speed required to drive the main and tail rotors. Unlike powerplants, mechanical gearboxes cannot be duplicated (for redundancy) and have always been a major weak point in helicopter reliability. In-flight catastrophic gear failures often result in gearbox jamming and subsequent fatalities, whereas loss of lubrication can trigger an onboard fire. Another weakness of mechanical gearboxes is their transient power limitation, due to structural fatigue limits. Recent EASA studies point to engines and transmissions as the prime causes of crashes, just after pilot error.
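The reduction a helicopter gearbox must provide can be illustrated with a single division; the turbine and rotor speeds below are generic order-of-magnitude assumptions, not figures for any specific aircraft.

# Order-of-magnitude gearbox reduction from turbine speed to rotor speed
# (illustrative assumed values only).
turbine_output_rpm = 20000.0   # assumed gas-turbine output shaft speed
main_rotor_rpm = 350.0         # assumed main rotor speed
tail_rotor_rpm = 2000.0        # assumed tail rotor speed

print(f"Main rotor reduction ~ {turbine_output_rpm / main_rotor_rpm:.0f}:1")
print(f"Tail rotor reduction ~ {turbine_output_rpm / tail_rotor_rpm:.0f}:1")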
By contrast, electromagnetic transmissions do not use any parts in contact; hence lubrication can be drastically simplified, or eliminated. Their inherent redundancy offers good resilience to single points of failure. The absence of gears enables high power transients without impact on service life. The concept of electric propulsion and electromagnetic drive applied to a helicopter was brought to reality by Pascal Chretien, who designed, built and flew the world's first man-carrying, free-flying electric helicopter. The project went from the conceptual computer-aided design model on 10 September 2010 to the first testing at 30% power on 1 March 2011 – less than six months. The aircraft first flew on 12 August 2011. All development was conducted in Venelles, France.
Hazards: As with any moving vehicle, unsafe operation could result in loss of control, structural damage, or loss of life. The following is a list of some of the potential hazards for helicopters:
Settling with power is when the aircraft has insufficient power to arrest its descent. This hazard can develop into vortex ring state if not corrected early.
Vortex ring state is a hazard induced by a combination of low airspeed, high power setting, and high descent rate. Rotor-tip vortices circulate from the high pressure air below the rotor disk to low pressure air above the disk, so that the helicopter settles into its own descending airflow. Adding more power increases the rate of air circulation and aggravates the situation. It is sometimes confused with settling with power, but they are aerodynamically different.
Retreating blade stall is experienced during high speed flight and is the most common limiting factor of a helicopter's forward speed.
Ground resonance is a self-reinforcing vibration that occurs when the lead/lag spacing of the blades of an articulated rotor system becomes irregular.
Low-G condition is an abrupt change from a positive G-force state to a negative G-force state that results in loss of lift (unloaded disc) and subsequent roll over. If aft cyclic is applied while the disc is unloaded, the main rotor could strike the tail causing catastrophic failure.
Dynamic rollover in which the helicopter pivots around one of the skids and 'pulls' itself onto its side (almost like a fixed-wing aircraft ground loop).
Powertrain failures, especially those that occur within the shaded area of the height-velocity diagram (a rough boundary check is sketched after this list).
Tail rotor failures which occur from either a mechanical malfunction of the tail rotor control system or a loss of tail rotor thrust authority, called "loss of tail-rotor effectiveness" (LTE).
Brownout in dusty conditions or whiteout in snowy conditions.
Low rotor RPM is when the engine cannot drive the blades at sufficient RPM to maintain flight.
Rotor overspeed, which can over-stress the rotor hub pitch bearings (brinelling) and, if severe enough, cause blade separation from the aircraft.
Wire and tree strikes due to low altitude operations and take-offs and landings in remote locations.
Controlled flight into terrain in which the aircraft is flown into the ground unintentionally due to a lack of situational awareness.
Mast bumping in some helicopters
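As referenced in the powertrain-failure item above, the height-velocity diagram marks combinations of height and airspeed from which a safe power-off landing is unlikely. The boundary values in the sketch below are invented for illustration only and must not be read as any aircraft's actual H-V diagram.

# Crude height-velocity ("avoid area") check with invented boundary values.
def in_avoid_region(height_ft, airspeed_kt,
                    low_speed_kt=30.0, min_safe_height_ft=400.0, low_hover_ft=10.0):
    """True if the point lies in the illustrative low-airspeed avoid area:
    too slow to flare, too high to land from, too low to establish autorotation."""
    if airspeed_kt >= low_speed_kt:
        return False               # enough airspeed for a flare in this toy model
    return low_hover_ft < height_ft < min_safe_height_ft

for h, v in ((5, 0), (150, 10), (600, 10), (150, 60)):
    print(f"height={h:4d} ft, speed={v:3d} kt -> avoid region: {in_avoid_region(h, v)}")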
List of fatal crashes:
World records:
See also:
References:
Notes:
Footnotes:
Bibliography:
External links:
"Helicopterpage.com – How Helicopters Work" Complete site explaining different aspects of helicopters and how they work.
"Planes That Go Straight Up". 1935 article about early development and research into helicopters.
"Flights — of the Imagination". 1918 article on helicopter design concepts.
"Twin Windmill Blades Fly Wingless Ship" Popular Mechanics, April 1936
Silent (Russian-language intertitled) video about the Cheremukhin/Yuriev TsAGI 1-EA pioneer helicopter
American Helicopter Society
Graham Warwick (17 June 2016). "How The Helicopter Has Developed". Aviation Week & Space Technology. Getting from idea to reality took far longer for the helicopter than for the fixed-wing aircraft. |
mil_tactics_continued_pretraining.csv | Help:Authority control | Supported files: The following authority files are supported on the English Wikipedia:
See also: Authority control |
mil_tactics_continued_pretraining.csv | Herbst maneuver | See also: Supermaneuverability
Immelmann turn
References:
External links: "X-31 in flight, Herbst maneuver." (NASA video)
"X-31 Demonstrating High Angle of Attack – Herbst Maneuver." (NASA photo)
USAF & NATO Report RTO-TR-015 AC/323/(HFM-015)/TP-1 (2001) |
mil_tactics_continued_pretraining.csv | History of aerial warfare | Fictional predictions: Since early history, various cultures developed myths of flying gods and deities, some of whom such as Zeus could throw thunderbolts from on high at earthbound humans. There were also fictions of humans finding ways to fly, such as the Greek Daedalus and Icarus. A logical combination was to imagine mundane humans flying – and making military use of their ability to fly. Imagination long preceded the technology needed for such warfare to be actually carried out.
In the third book of Jonathan Swift's Gulliver's Travels (1726), the King of the flying island of Laputa resorts to bombarding enemies and rebellious subjects with heavy rocks thrown from the air.
What might be the first detailed fictional depiction of what is now called an Air Force can be found in "The Wicked Prince" by Hans Christian Andersen (1840). The story's world-conquering Prince, of boundless ambition and cruelty, orders "a magnificent ship to be constructed, with which he could sail through the air; it was gorgeously fitted out and of many colours; like the tail of a peacock, it was covered with thousands of eyes, but each eye was the barrel of a gun. The prince sat in the center of the ship, and had only to touch a spring in order to make thousands of bullets fly out in all directions, while the guns were at once loaded again". Later, the Prince develops an improved model, able to also fire "Steel Thunderbolts", and orders many thousands of them built to make up a massive air flotilla manned by his troops.
H. G. Wells in The War in the Air (1907) grasped the full implications of aerial warfare and air power and how it would revolutionize warfare. Many of the book's predictions – such as the devastating strategic bombing of cities or how bombing from the air would make surface dreadnoughts obsolete in naval warfare – only came about in World War II rather than the First World War, which broke out a few years after the book's publication.
Kite warfare: The earliest documented aerial warfare took place in ancient China, when a manned kite was set off to spy for military intelligence and communication.
Balloon warfare:
Ancient China: In or around the second or third century, a prototype hot air balloon, the Kongming lantern, was invented in China, serving as a military communication station.
Europe: Some minor warfare use was made of balloons in the infancy of aeronautics. The first instance was by the French Aerostatic Corps at the Battle of Fleurus in 1794, who used a tethered balloon, L'entreprenant, to gain a vantage point.
Balloons had disadvantages. They could not fly in bad weather, fog, or high winds. They were at the mercy of the winds and were also very large targets.
Austrian use at Venice in 1849: The first aggressive use of balloons in warfare took place in 1849. Austrian imperial forces besieging Venice attempted to float some 200 paper hot air balloons each carrying a 24–30-pound (11–14 kg) bomb that was to be dropped from the balloon with a time fuse over the besieged city. The balloons were launched mainly from land; however, some were also launched from the side-wheel steamer SMS Vulcano that acted as a balloon carrier. The Austrians used smaller pilot balloons to determine the correct fuse settings. At least one bomb fell in the city; however, due to the wind changing after launch, most of the balloons missed their target, and some drifted back over Austrian lines and the launching ship Vulcano.
American Civil War:
Union Army Balloon Corps: The American Civil War was the first war to witness significant use of aeronautics in support of battle. Thaddeus Lowe made noteworthy contributions to the Union war effort using a fleet of balloons he created. In June 1861, Professor Thaddeus S. C. Lowe left his work in the private sector and offered his services as an aeronaut to President Lincoln, who took some interest in the idea of an air war. Lowe's demonstration of flying a balloon over Washington, DC, and transmitting a telegraph message to the ground was enough to have him introduced to the commanders of the topographical engineers; initially it was thought balloons could be used for preparing better maps.
Lowe's first action was a free flight observation of the Confederate positions at the First Battle of Bull Run in July 1861.
Lowe was called to Fort Corcoran and ascended in order to spot rebel encampments. With flag signals he directed artillery fire on the rebels.
Lowe and other balloonists formed the Union Army Balloon Corps. Lowe insisted on the strict use of tethered (as opposed to free) flight because of concern about aeronauts being shot down over enemy lines and punished as spies. By attaining altitudes from 1,000 feet (300 m) to as much as 3½ miles, observers gained an expansive view of the battlefield and beyond.
As the Confederates retreated, the war turned into the Peninsular Campaign. Due to the heavy forests on the peninsula, the balloons were unable to follow on land so a coal barge was converted to operate the balloons. The balloons and their gas generators were loaded aboard and taken down the Potomac, where reconnaissance of the peninsula could continue.
At the Battle of Fair Oaks, Lowe was able to view the enemy army advancing and sent a dispatch to have reserves sent.
The balloon corps was later assigned to the engineers corps. By August 1863, the Union Army Balloon Corps was disbanded.
Confederate Army: The Confederate Army also made use of balloons, but they were gravely hampered by lack of supplies due to embargoes. They were forced to fashion their balloons from colored silk dress-making material, and their use was limited by the infrequent supply of gas in Richmond, Virginia. By the summer of 1863, all balloon reconnaissance of the American Civil War had ceased.
Before World War I: The Declaration Prohibiting the Discharge of Projectiles and Explosives from Balloons, part of the 1907 Hague Convention ratified by the United States, Great Britain and China, outlawed aircraft ordnance and aerial bombing.
The United States Navy showed interest in naval aviation from the turn of the 20th century. In August 1910 Jacob Earl Fickel conducted his first experiments with Glenn Curtiss – shooting a gun from an airplane. In 1910–1911 the Navy conducted experiments which proved the practicality of carrier-based aviation. On November 14, 1910, near Hampton Roads, Virginia, civilian pilot Eugene Ely took off from a wooden platform installed on the scout cruiser USS Birmingham (CL-2). He landed safely on shore a few minutes later. Ely proved several months later that it was also possible to land on a ship. On January 18, 1911, he landed on a platform attached to the American cruiser USS Pennsylvania (ACR-4) in San Francisco harbor.
The first use of airplanes in an actual war occurred in the 1911 Italo-Turkish War with Italian Army Air Corps Blériot XI and Nieuport IV monoplanes bombing a Turkish camp at Ain Zara in Libya. In the First Balkan War (1912) the Bulgarian Air Force bombed Turkish positions at Adrianople, while the Greek Aviation performed, over the Dardanelles, the first naval/air co-operation mission in history.
The United States Army used airplanes for scouting and communications in its 1916–1917 campaign against Pancho Villa. Air reconnaissance was carried out in both wars too. The air-dropped bomb was extensively used during the First Balkan War (including in the first night-bombing on 7 November 1912), and subsequently used by the Imperial German Air Service during World War I.
World War I: In World War I both sides initially made use of tethered balloons and airplanes for observation purposes, both for information gathering and directing of artillery fire. At first, enemy pilots simply exchanged hand waves, but a desire to prevent enemy observation led to airplane pilots attacking other airplanes and balloons, initially with small arms carried in the cockpit.
The addition of deflector plates to the back of propellers by French pilot Roland Garros and designer Raymond Saulnier in the Morane-Saulnier monoplane was the first example of an aircraft able to fire through its propeller, permitting Garros to score three victories in April 1915. Dutch aircraft designer Anthony Fokker developed a successful gun synchronizer in 1915, resulting in German Leutnant Kurt Wintgens scoring the first known victory for a synchronized gun-equipped fighter aircraft, on July 1, 1915.
The Allies quickly developed their own synchronization gears, leading to the birth of aerial combat, more commonly known as dogfighting. Tactics for dogfighting evolved by trial and error. The German ace Oswald Boelcke created eight essential rules of dogfighting, the Dicta Boelcke.
Both sides also made use of aircraft for bombing, strafing, maritime reconnaissance, antisubmarine warfare, and the dropping of propaganda. The German military made use of Zeppelins and, later on, bombers such as the Gotha, to drop bombs on Southern England. By the end of the war airplanes had become specialized into bombers, fighters, and observation (reconnaissance) aircraft. By 1916, aerial combat had already progressed to the point where dogfighting tactics based on such doctrines as the Dicta Boelcke allowed air supremacy to be achieved. During the course of the war, new designs led to air supremacy shifting back and forth between the Germans and Allies.
Interwar period: Between 1918 and 1939 aircraft technology developed very rapidly. In 1918 most aircraft were biplanes with wooden frames, canvas skins, wire rigging and air-cooled engines. Biplanes continued to be the mainstay of air forces around the world and were used extensively in conflicts such as the Spanish Civil War. Most industrial countries also created air forces separate from the army and navy. However, by 1939 military biplanes were in the process of being replaced with metal framed monoplanes, often with stressed skins and liquid-cooled engines. Top speeds had tripled; altitudes doubled; ranges and payloads of bombers increased enormously.
Some theorists, especially in Britain, considered that aircraft would become the dominant military arm in the future. They imagined that a future war would be won entirely by the destruction of the enemy's military and industrial capability from the air. The Italian general Giulio Douhet, author of The Command of the Air, was a seminal theorist of this school, which has been associated with Stanley Baldwin's statement that "the bomber will always get through"; that is, regardless of air defences, sufficient raiders will survive to rain destruction on the enemy's cities. This led to what would later be called a strategy of deterrence and a "bomber gap", as nations measured air force power by number of bombers.
Others, such as General Billy Mitchell in the United States, saw the potential of air power to augment the striking power of naval surface fleets. German and British pilots had experimented with aerial bombing of ships and air-dropped torpedoes during World War I with mixed results. The vulnerability of capital ships to aircraft was demonstrated on 21 July 1921 when a squadron of bombers commanded by General Mitchell sank the ex-German battleship SMS Ostfriesland with aerial bombs; although the Ostfriesland was stationary and defenseless during the exercise, its destruction demonstrated the potency of airplanes against ships.
It was during the Banana Wars, while fighting bandits, freedom fighters and insurgents in places like Haiti, the Dominican Republic and Nicaragua that United States Marine Corps aviators began to experiment with air-ground tactics to support Marines on the ground. In Haiti the Marines began to experiment with dive bombing and in Nicaragua where they began to perfect it. While other nations and services had tried variations of this technique, Marine aviators were the first to include it in their tactical doctrine.
Germany was banned from possessing an air force by the terms of the World War I armistice. The German military continued to train its soldiers as pilots clandestinely until Hitler was ready to openly defy the ban. This was done by forming the Deutscher Luftsportverband, a flying enthusiast's club, and training pilots as civilians, and some German pilots were even sent to the Soviet Union for secret training; a trained air force was thus ready as soon as the word was given. This was the beginning of the Luftwaffe.
World War II: Military aviation came into its own during World War II. The increased performance, range, and payload of contemporary aircraft meant that air power could move beyond the novelty applications of World War I, becoming a central striking force for all the combatant nations.
Over the course of the war, several distinct roles emerged for the application of air power.
Strategic bombing: Strategic bombing of civilian targets from the air was first proposed by the Italian theorist General Giulio Douhet. In his book The Command of the Air (1921), Douhet argued future military leaders could avoid falling into bloody World War I–style trench stalemates by using aviation to strike past the enemy's forces directly at their vulnerable civilian populations. Douhet believed such strikes would cause these populations to force their governments to surrender.
Douhet's ideas were paralleled by other military theorists who emerged from World War I, including Sir Hugh Trenchard in Britain. In the interwar period, Britain and the United States became the most enthusiastic supporters of the strategic bombing theory, with each nation building specialized heavy bombers specifically for this task.
Japanese strategic bombing
Strategic bombing, mostly targeting large Chinese cities, was independently conducted during the Second Sino-Japanese War and World War II by the Imperial Japanese Navy Air Service and the Imperial Japanese Army Air Service. There were also air raids on the Philippines and Australia, as well as on cities in Burma and Malaysia. The Navy and Army air services used tactical bombing against ships and military positions, as at Pearl Harbor.
Luftwaffe
In the early days of World War II, the Luftwaffe launched devastating air attacks against besieged cities. During the Battle of Britain, the Luftwaffe, frustrated in its attempts to gain air superiority, turned to bombing British cities. However, these raids did not have the effect predicted by prewar theorists.
Soviet Red Air Force
Although the rapid industrialization the Soviet Union experienced in the 1930s had the potential to enable the Soviet Air Forces to be effective against the Luftwaffe, Stalin's purges left the organization weakened. However, when Germany invaded in June 1941, the size of the Soviet Air Forces allowed it to absorb horrendous casualties and still maintain capability. Despite the near collapse of Soviet forces in 1941, they survived, as German forces outran their supply lines and the Americans and British provided Lend Lease assistance.
Strategic bombing requires that the enemy's industrial war capacity be neutralized, but many Soviet factories were moved far out of reach of the Luftwaffe's bombers. Because the Luftwaffe's resources were needed to support the German army, it became overstretched, and even victorious battles degraded Germany's air force through attrition. By 1943, the Soviets were able to produce considerably more airplanes than their German rivals; at Kursk, for example, the Soviets had twice the number of airplanes that the Luftwaffe had. Utilizing overwhelming numerical superiority, Soviet forces were able to drive the Germans out of Soviet territory and take the war to Germany.
Allied air forces in Europe
The British started a strategic bombing campaign in 1940 that was to last for the rest of the war. Early British bombers were all twin-engined designs, and the 1939 Battle of the Heligoland Bight had shown the vulnerability of bombers to fighter attack. Therefore, RAF Bomber Command turned to a policy of area bombing at night. Later in the war pathfinder tactics, radio location, ground mapping radar, and very low-level bombing enabled specific targets to be attacked.
When the USAAF arrived in England in 1942, the Americans were convinced they could carry out successful daylight raids. The U.S. Eighth Air Force was equipped with high-altitude four-engined designs that also featured stronger defensive armament. U.S. doctrine held that heavy bombers flying in daylight in large formations would be sufficient to gain air superiority without escort fighters. The intended raids would hit hard at chokepoints in the German war economy such as oil refineries or ball bearing factories.
The USAAF was compelled to change its doctrine, since bombers alone, no matter how heavily armed, could not achieve air superiority against single-engined fighters. In a series of missions in 1943 that penetrated beyond the range of fighter cover, loss rates reached up to twenty percent. The Allies lost 160,000 airmen and 33,700 planes during World War II, and almost 68,000 U.S. airmen died.
Air superiority
During the Battle of Britain, many of the best Luftwaffe pilots had been forced to bail out over British soil, where they were captured. As the quality of the Luftwaffe fighter arm decreased, the Americans introduced long-range escort fighters carrying drop tanks, such as the North American P-51 Mustang. Newer, inexperienced German pilots, though often flying potentially superior aircraft, became less and less effective at thinning the late-war bomber streams. Adding fighters to the daylight raids gave the bombers much-needed protection and greatly improved the impact of the strategic bombing effort.
Over time, from 1942 to 1944, the Allies' air forces became stronger while the Luftwaffe weakened. During 1944, Germany's air force lost control of Germany's skies. As a result, nothing in Germany could be protected, whether army units, factories, civilians in cities, or even the nation's capital, and German soldiers and civilians began to be slaughtered in the tens of thousands by aerial bombardment.
Effectiveness
Strategic bombing by non-atomic means did not, however, win the war for the Allies, nor did it succeed in breaking the will of the German (and Japanese) people to resist. But in the words of the German armaments minister Albert Speer, it created "a second front in the air." Speer succeeded in increasing the output of armaments right up to mid-1944 in spite of the bombing. Still, the war against the British and American bombers demanded enormous amounts of resources: antiaircraft guns, day and night fighters, radars, searchlights, manpower, ammunition, and fuel.
On the Allied side, strategic bombing likewise diverted material resources, equipment (such as radar), aircraft, and manpower away from the Battle of the Atlantic and the Allied armies. On the German side, the diversion of resources to air defence meant that army groups in Russia, Italy, and France rarely saw friendly aircraft and constantly ran short of tanks, trucks, and anti-tank weapons.
U.S. Bombing of Japan
In June 1944, Boeing B-29 Superfortresses launched from China bombed Japanese factories. From November 1944, increasingly intense raids were launched from bases closer to Japan. Tactics evolved from high-altitude attacks to lower-altitude attacks, with most defensive guns removed from the bombers and a switch made to incendiary bombs. These attacks devastated many Japanese cities.
In August 1945, B-29 Superfortresses dropped atomic bombs on Hiroshima and Nagasaki while the Soviets invaded Manchuria. The Japanese then surrendered unconditionally, officially ending World War II.
Tactical air support: By contrast with the British strategists, the primary purpose of the Luftwaffe was to support the Army. This accounted for the presence of large numbers of medium bombers and dive bombers on strength and the scarcity of long-range heavy bombers. This 'flying artillery' greatly assisted in the successes of the German Army in the Battle of France (1940). Hitler determined that air superiority was essential for the invasion of Britain. When this was not achieved in the Battle of Britain, the invasion was canceled, making this the first major battle whose outcome was determined primarily in the air.
The war in Russia forced the Luftwaffe to devote the majority of its resources to providing tactical air support for the beleaguered German army. In that role, the Luftwaffe used the Junkers Ju 87, Henschel Hs 123 and modified fighters such as Messerschmitt Bf 109s and Focke-Wulf Fw 190s.
The Red Air Force was also used primarily in the tactical support role, and towards the end of the war it was very effective in supporting the Red Army in its advance across Eastern Europe. An aircraft of major importance to the Soviets was the Ilyushin Il-2 Sturmovik, 'flying artillery' that made life very difficult for panzer crews and played an important part in the Soviet victory at Kursk, one of the biggest tank battles in history.
Military transport and use of airborne troops: Military transport was invaluable to all sides in maintaining the supply and communications of ground troops, and was used on many notable occasions, such as the resupply of German troops in and around Stalingrad after Operation Uranus, and in the employment of airborne troops. After the Red Army's first trials of airborne troops in the early 1930s, many European nations and Japan also formed airborne forces, and these saw extensive service in all theatres of the Second World War.
However, their employment as shock troops to surprise static enemy forces met with only limited success. Most airborne troops served as light infantry by the end of the war, despite attempts at massed use in the Western Theatre by the US and Britain during Operation Market Garden.
Naval aviation: Aircraft and the aircraft carrier first became important in naval battles in World War II. Carrier-based aircraft were specialized as dive bombers, torpedo bombers, and fighters.
Shore-based flying boats such as the Consolidated PBY Catalina and Short Sunderland helped find submarines and surface fleets. The aircraft carrier replaced the battleship as the most powerful naval offensive weapons system, as battles between fleets were increasingly fought out of gun range by aircraft. The Yamato, the most powerful battleship ever built, was first turned back by aircraft from light escort carriers and later sunk because it lacked air cover of its own.
The US launched US Army Air Forces land-based bombers from United States Navy carriers in a raid against Tokyo. Smaller carriers were built in large numbers to escort slow cargo convoys or supplement fast carriers. Aircraft for observation or light raids were also carried by battleships and cruisers, while blimps were used to search for attack submarines.
In the Battle of the Atlantic, aircraft carried by low-cost escort carriers were used for antisubmarine patrol, defense, and attack. At the start of the Pacific War in 1941, Japanese carrier-based aircraft sank several US battleships at Pearl Harbor and land-based aircraft sank two large British warships. Engagements between Japanese and American naval fleets were then conducted largely or entirely by aircraft including the battles of Coral Sea, Midway, Bismarck Sea and Philippine Sea.
Cold War: Military aviation in the post-war years was dominated by the needs of the Cold War. These years saw the almost total conversion of combat aircraft to jet power, which resulted in enormous increases in the speeds and altitudes of aircraft. Until the advent of the intercontinental ballistic missile, the major powers relied on high-altitude bombers to deliver their newly developed nuclear deterrent, and each country strove to develop the technology of bombers and of the high-altitude fighters that could intercept them. The concept of air superiority began to play a heavy role in aircraft designs for both the United States and the Soviet Union.
The Americans developed and made extensive use of high-altitude observation aircraft for intelligence-gathering. The Lockheed U-2, and later the Lockheed SR-71 Blackbird, were developed in great secrecy. The U-2 was at the time expected to be invulnerable to defensive measures because of its extreme altitude capabilities. It therefore came as a shock when the Soviets downed one piloted by Gary Powers with a surface-to-air missile.
Air combat was also transformed by the increased use of air-to-air guided missiles of growing sophistication and range. In the 1970s and 1980s it became clear that speed and altitude were not enough to protect a bomber against air defences. The emphasis therefore shifted to maneuverable attack aircraft that could fly 'under the radar', at altitudes of a few hundred feet.
Korean War: The Korean War is best remembered for jet combat, but it was one of the last major wars in which propeller-powered fighters such as the North American P-51 Mustang, Vought F4U Corsair and the carrier-based Hawker Sea Fury and Supermarine Seafire were used. Turbojet fighter aircraft such as Lockheed F-80 Shooting Stars, Republic F-84 Thunderjets and Grumman F9F Panthers came to dominate the skies, overwhelming North Korea's propeller-driven Yakovlev Yak-9s and Lavochkin La-9s.
From 1950, the North Koreans flew Soviet-made MiG-15 jet fighters, which introduced the near-sonic speeds of swept wings to air combat. Though an open secret during the war, the most formidable pilots have since admitted that they were experienced Soviet Air Force pilots, a casus belli deliberately overlooked by the UN allied forces, who suspected the use of Russians but were reluctant to engage in open war with the Soviet Union and the People's Republic of China.
At first, UN jet fighters, which also included Royal Australian Air Force Gloster Meteors, had some success, but straight-winged jets were soon outclassed in daylight by the superior speed of the MiGs. At night, however, radar-equipped Marine Corps Douglas F3D Skyknight night fighters claimed five MiG kills with no losses of their own, and no B-29s under their escort were lost to enemy fighters.
In December 1950, the United States Air Force rushed in its own swept-wing fighter, the North American F-86 Sabre. The MiG could fly higher, 50,000 feet (15,000 m) versus 42,000 feet (12,800 m), offering a distinct advantage at the start of combat. In level flight, their maximum speeds were comparable at about 660 mph (1,060 km/h). The MiG could climb better, while the Sabre, with its all-flying tailplane, could dive better. For weapons, the MiG carried two 23 mm and one 37 mm cannon, compared to the Sabre's six .50 caliber (12.7 mm) machine guns. The American .50 caliber machine guns, while not packing the same punch, carried many more rounds and were aimed with a more accurate radar-ranging gunsight. The U.S. pilots also had the advantage of G-suits, which were used for the first time in this war.
Even after the Air Force introduced the advanced F-86, its pilots often struggled against the jets flown by Soviet pilots, dubbed "honchos". The UN gradually gained an air superiority over most of Korea that lasted until the end of the war; it was a decisive factor in helping the UN first advance into the north and then resist the Chinese invasion of South Korea.
After the war, the USAF claimed 792 MiG-15s and 108 other aircraft shot down by Sabres for the loss of 78 Sabres. Later research reduced the total to 379 victories, which is still higher than the 345 losses shown in USSR records.
The Soviets claimed about 1,100 air-to-air victories and 335 combat MiG losses at that time. China's official losses were 231 planes shot down in air-to-air combat (mostly MiG-15s) and 168 other losses. The number of losses of the North Korean Air Force was not revealed. It is estimated that it lost about 200 aircraft in the first stage of the war, and another 70 aircraft after Chinese intervention.
Soviet claims of 650 victories over the Sabres, and China's claims of another 211 F-86s, are considered to be exaggerated by the USAF.
The Korean War was the first conflict in which the helicopter was used extensively. While helicopters such as the Sikorsky YR-4 were used in World War II, their use was rare, and jeeps like the Willys MB were the main means of evacuating injured soldiers. In the Korean War, helicopters like the Sikorsky H-19 partially took over the non-combat medevac role.
Indo-Pakistani Wars: During the Indo-Pakistani War of 1965, the two air forces engaged each other in full-scale combat for the first time since independence. Both countries hold contradictory combat loss claims for the war, and no neutral sources have verified them. The Pakistan Air Force (PAF) claimed 104 Indian Air Force (IAF) aircraft while losing only 19 in the process; India claims it lost 35 aircraft while shooting down 73 Pakistani aircraft. During the war the numerically larger IAF was not able to achieve air superiority over the qualitatively superior Pakistan Air Force, and the air war was effectively a stalemate.
By the time of the Indo-Pakistani War of 1971, the Indian Air Force had both a numerical and a qualitative edge. Newly procured Mikoyan-Gurevich MiG-21s and Sukhoi Su-7s had been inducted into service, while on the other side of the border the PAF had successfully inducted new aircraft such as the Shenyang J-6 and Dassault Mirage III. The war began with Operation Chengiz Khan, Pakistan's pre-emptive strike of 3 December 1971 on 11 Indian airbases. After the initial pre-emptive strike, the PAF adopted a defensive stance in response to the Indian retaliation. As the war progressed, the Indian Air Force continued to fight the PAF over the conflict zones, but the number of sorties flown by the PAF decreased day by day. The Indian Air Force flew 4,000 sorties while the PAF offered 3,300 in retaliation, partly because of shortages of non-Bengali technical personnel. In East Pakistan, the PAF's lone No. 14 Squadron 'Tail Choppers', pitched against ten IAF squadrons, was ultimately grounded when the Tezgaon airfield in Dhaka (Dacca) was put out of commission after seven days of repeated bombing by Indian MiG-21s, Hunters and Su-7s, allowing the IAF to attain air superiority in the East.
Vietnam War: The Republic of Vietnam Air Force (VNAF) was originally equipped with helicopters such as the Piasecki CH-21 and propeller-powered aircraft such as the North American T-28 Trojan, as jet aircraft were disallowed by treaty. As US involvement increased, most aircraft were flown by US forces.
Large-scale use of helicopters by the US Army in Vietnam led to a new class of airmobile troops and the introduction of "Air Cavalry" in the U.S., culminating in extensive use of the Bell UH-1 Huey helicopter, which would become a symbol of that war, while the Sikorsky CH-54 Tarhe "Skycrane" and Boeing-Vertol CH-47 Chinook lifted heavier loads such as vehicles or artillery. Troops were able to land unexpectedly, strike, leave again, and evacuate their wounded. The specialized AH-1 Cobra was developed from the Huey for escort and ground support duties. The later Soviet campaign in Afghanistan would also see widespread use of helicopters as part of air assault brigades and regiments.
US forces provided close support of ground forces over South Vietnam and strategic bombing of targets over North Vietnam. Many aircraft flying close support or COIN (counter-insurgency) missions were propeller-powered types such as the Cessna O-1 Bird Dog and North American OV-10 Bronco FAC spotters, the Douglas A-1 Skyraider, Douglas B-26 Invader, and Douglas AC-47 Spooky gunship. Fairchild C-123 Provider and Lockheed C-130 Hercules transports flew supplies into battlefields such as Khe Sanh.
"Fast movers" included the supersonic North American F-100 Super Sabre, while the giant Boeing B-52 Stratofortress would be modified to unload a massive high explosive payload on enemy troop concentrations. The Lockheed AC-130 would become the ultimate gunship, while the AX specification to replace the Skyraider would evolve into the Fairchild Republic A-10 Thunderbolt II.
USAF Republic F-105 Thunderchiefs flew the bulk of strike missions against North Vietnam in Operation Rolling Thunder, while carrier-based Douglas A-4 Skyhawks (which could "buddy" refuel one another) were flown by the Navy. That first campaign was marred by carefully measured regulations that prohibited attacks against SAM sites and fighter bases, and by frequent bombing halts, and it produced little in the way of political results. Rolling Thunder saw the first combat use of electronic computers aboard PIRAZ ships to display comprehensive real-time aircraft position information for force commanders.
Lessons learned were applied to the later Operation Linebacker, which employed McDonnell Douglas F-4 Phantom IIs, B-52 Stratofortresses, swing-wing General Dynamics F-111 Aardvarks, LTV A-7 Corsair IIs and all-weather Grumman A-6 Intruders, and was more successful in bringing North Vietnam to the negotiating table after its massive ground invasion. North Vietnam effectively combined Soviet and Chinese anti-aircraft artillery, SA-2 guided missiles, and MiG fighters to create the most heavily defended airspace up to that time.
US air strikes would combine the use of airborne radar platforms such as the Lockheed EC-121 Warning Star, Boeing KC-135 Stratotankers for air refueling, radar jamming aircraft and specialized "Wild Weasel" units to attack SAM missile sites. Jolly Green Giant helicopter crews escorted by Douglas A-1 Skyraiders ("Sandy"s) would retrieve downed pilots over hostile territory. With the use of "smart" guided bombs late in the war, this would set the model for future US air operations.
Experts were surprised when advanced F-105s were shot down in their first encounters with the elderly but nimble MiG-17. Dogfights had been thought obsolete in the age of missiles, but pilots now needed maneuverability. The McDonnell Douglas F-4 Phantom II was quickly tasked with protecting against MiGs, but it lacked a built-in gun at a time when missiles were still very unreliable. Air combat training schools such as TOPGUN would improve kill ratios, but combat experience started programs that would produce agile air superiority fighters with guns, such as the McDonnell Douglas F-15 Eagle, by the 1970s.
South Vietnam fell without US air support when faced with a massive assault in 1975. The VNAF was never supplied with powerful fighters and bombers such as the F-4 Phantom and B-52 that could strike at North Vietnam.
Middle East: In the Six-Day War of 1967, the Israeli Air Force launched pre-emptive strikes which destroyed opposing Arab air forces on the ground. The Yom Kippur War of 1973 saw the Arab deployment of mobile 2K12 Kub (SA-6) missiles which proved effective against low-flying Israeli aircraft until they were neutralized by ground forces.
Iran–Iraq War: In the Iran–Iraq War (1980–1988), the use of aerial warfare was continuous. At the war's beginning Iraq attempted to destroy the Islamic Republic of Iran Air Force by bombing its airfields but failed due to lack of pilot training and Iranian air base defenses. The war also saw the first helicopter vs. helicopter engagements between the Iraqi and Iranian air forces.
Falklands War (1982): During the six-week Falklands War, British carrier-based British Aerospace Sea Harriers and Hawker Siddeley Harriers flew over 1,500 sorties and Avro Vulcans flew long-range bombing missions. Twenty-one Argentine fixed-wing aircraft were destroyed in the air by British Harriers and Sea Harriers, a further 18 were destroyed by British surface-to-air missiles, 15 were destroyed on the ground and 14 were captured. In all, 68 Argentine fixed-wing aircraft were captured or destroyed by British forces, representing 28% of the 240 fixed-wing aircraft the Argentinians had at the start of the war. When accidents and friendly fire are included, 31% of the total 240 fixed-wing aircraft were lost.
Post Cold War: The collapse of the Soviet Union in 1991 forced Western air forces to shift from the massive numbers felt to be necessary during the Cold War to smaller numbers of multi-role aircraft. The closure of several overseas military bases and the U.S. Base Realignment and Closure program have served to highlight the effectiveness of aircraft carriers in the absence of dedicated army or air force bases, as the Falklands War and U.S. operations in the Persian Gulf demonstrated.
The advent of precision-guided munitions has allowed for strikes on arbitrary surface targets once proper reconnaissance is performed (network-centric warfare).
The Stockholm International Peace Research Institute (SIPRI) has noted that sales of combat aircraft can have a destabilizing effect because of their ability to quickly strike neighboring countries, such as during Operation Orchard in 2007 when the Israelis unilaterally attacked Syria.
Gulf War (1991): The role of air power in modern warfare was dramatically demonstrated during the Gulf War in 1991. Air attacks were made on Iraqi command and control centers, communications facilities, supply depots, and reinforcement forces. Air superiority over Iraq was gained before major ground combat began.
The initial strikes were composed of Tomahawk cruise missiles launched from ships, Lockheed F-117 Nighthawk stealth attack aircraft armed with laser-guided bombs, and aircraft armed with anti-radar missiles. These first attacks destroyed the air defence network and allowed fighter-bombers to gain air superiority over the country.
Fairchild Republic A-10 Thunderbolt IIs attacked Iraqi armored forces with Gatling guns and Maverick missiles, supporting the advance of US ground troops. Attack helicopters fired laser-guided Hellfire missiles and TOW missiles. The allied air fleet also made use of AWACS aircraft and Boeing B-52 Stratofortress bombers.
The aerial strike force was made up of over 2,250 combat aircraft, including 1,800 US aircraft, which fought against an Iraqi force of about 500 aircraft, primarily MiG-29 and Mirage F1 fighters. More than 88,000 combat missions had been flown by allied forces, with over 88,000 tons of bombs dropped, by the end of the fifth week.
Kargil War (1999): On 11 May 1999, the Indian Air Force was called in to provide helicopters for close air support to the Indian Army at the height of the Kargil conflict with Pakistan. The PAF was not a party to this conflict, so the entire IAF effort was concentrated on ground operations that went unopposed in the air. The first strikes were launched on 26 May, when the Indian Air Force struck infiltrator positions with fighter aircraft and helicopter gunships. The initial strikes saw MiG-27s carrying out offensive sorties, with MiG-21s and MiG-29s providing fighter cover. The IAF also deployed radars and MiG-29 fighters to keep a check on Pakistani military movements across the border.
On 27 May, the IAF lost a MiG-21 to enemy action and a MiG-27 to mechanical failure. The following day, a Mi-17 was lost to surface-to-air missiles while on an offensive sortie. These losses forced the IAF to withdraw helicopters from offensive roles. On 30 May, the IAF called into operation the Mirage 2000, which was deemed the best aircraft for the high-altitude conditions. The Mirage 2000s not only had better defensive equipment than the MiGs, but also gave the IAF the ability to carry out aerial raids at night. Indian MiG-29s were used extensively during the war to provide fighter escort for the Mirage 2000s, which attacked targets with laser-guided bombs. The Kargil conflict finally came to an end with a decisive Indian military and diplomatic victory.
Eritrean–Ethiopian War (1998–2000): The war was the first to see 4th-generation jet fighters battle each other. Most Eritrean MiG-29 losses were caused by dogfights with Ethiopian Su-27s.
Iraq War (2003–2011): During the 2003 invasion of Iraq led by US and British forces putatively to defeat the regime of Saddam Hussein, aerial warfare continued to be decisive. The US-British alliance began its air campaign on March 19 with limited nighttime bombing on the Iraqi capital of Baghdad. Several days later, intensive bombardment began. About 14,000 sorties were flown, and at a cost of $1 million each, 800 Tomahawk cruise missiles were fired at numerous targets in Iraq from March 19 until mid-April 2003. By this time Iraqi resistance had largely ended.
Iraqi anti-aircraft weapons were unable to open fire on high-altitude US bombers such as the B-52 or stealth aircraft such as the B-2 bomber and the F-117A. US and British aircraft used radar-detecting devices and aerial reconnaissance to locate Iraqi anti-aircraft weapons. Bunker buster bombs, designed to penetrate and destroy underground bunkers, were dropped on Iraqi command and control centers. Iraqi ground forces could not seriously challenge the American ground forces because of their air supremacy. By mid-April 2003, US-British forces controlled all of Iraq's major cities and oil fields.
2006 Lebanon War: At the beginning of the 2006 Lebanon War, Israel mounted an intensive aerial campaign aimed at eliminating Hezbollah and destroying its military capability, as stated by Israeli prime minister Ehud Olmert. It also aimed to secure the return of kidnapped Israeli soldiers. The campaign started by destroying Lebanese infrastructure and Hezbollah targets, and this continued during the 33 days of the war.
Compared with the results of the 1991 and 2003 wars against Iraq and the 1999 war against Yugoslavia, the Israeli air force was unable to accomplish its objectives as completely. This was partly a result of the military doctrine Hezbollah used in the war, which proved effective. There were also reports during the conflict that a Hezbollah-operated drone penetrated Israeli airspace and returned to Lebanese territory.
2022 Russian invasion of Ukraine: In 2022, the Russo-Ukrainian War became the first conflict in two decades to feature large-scale aerial warfare.
Additionally, the conflict in Ukraine has seen the rise of a new type of aerial warfare involving small, commercially available civilian drones (generally quadcopters) modified to attack enemy positions in buildings, vehicles, and trenches. These attacks are often carried out by modifying the drones to drop explosives such as grenades, or by equipping them with explosives and flying them directly into enemy positions to self-destruct.
This differs from previous warfare involving unmanned aerial vehicles (UAVs) in that these drones are readily available in large quantities, easily accessible to civilian populations, and require substantially less space and resources to operate compared to traditional larger, fixed-wing UAVs. Apart from being far more accessible to civilian combatants and carrying little to no risk of casualties for the attacking side, they provide the added benefit of quick, high-precision attacks at a fraction of the cost of traditional UAVs. This makes them ideal for the type of urban warfare seen throughout the conflict in Ukraine. The majority of these types of attacks are from Ukrainians against Russian invaders.
See also: Aerial warfare
Aeronautics
History of aviation
References:
Sources: Bammi, Y.M. (2002). Kargil 1999, Impregnable Conquered. Gorkha Publishers. pp. xxviii, 558, 65, 8 p. ISBN 978-81-7525-352-0. LCCN 2003305922.
Buckley, J.D.; Buckley, J.J. (1999). Air Power in the Age of Total War. Air Power in the Age of Total War. Indiana University Press. ISBN 978-0-253-21324-2. Retrieved 2022-03-05.
Gross, C.J. (2002). American Military Aviation: The Indispensable Arm. Centennial of flight series. Texas A&M University Press. ISBN 978-1-58544-215-7. Retrieved 2022-03-05.
External links: Official U.S. Army Aviation website
A Brief History of Air Warfare
World War I in photos: Aerial Warfare
mil_tactics_continued_pretraining.csv | History of military logistics | Antiquity: The most basic requirements of an army are food and water. Neolithic armies were equipped with weapons used for hunting — spears, knives, axes and bows and arrows. By 1150 BCE the Olmecs of Mesoamerica were producing obsidian weapons that were neither hunting weapons nor agricultural tools. Early armies were small due to the practical difficulty of supplying large numbers of people, and their radius of action was likewise limited to 80 to 90 kilometres or so. A ruler or warlord might use an army to extract tax or tribute, but it required a formidable logistical exercise to employ it.
By 700 BCE, Assyria had developed a standing army, with iron replacing bronze in weapons and armour, and cavalry replacing chariots. The Assyrian army may have been able to field as many as 50,000 men, which alone would have required a high degree of logistical acumen, but could operate up to 500 kilometres from its bases. The defences and fortifications of cities had improved to the point where siege warfare had become a complicated technological task, involving scaling ladders, battering rams, siege towers and tunnelling, and could take months. Supply of a besieging force therefore required the transport or construction of special equipment as well as the provision of food and water.
Alexander the Great's father, Philip II of Macedon, banned the use of carts on the grounds that they restricted the army's speed and mobility. Alexander continued this practice, with his army relying on horses and mules. He also used camels, many of which were captured along with Darius III's baggage train after the Battle of Issus. Although a cart drawn by a pair of oxen could carry up to 540 kilograms (1,200 lb), compared with about 110 kilograms (250 lb) for pack horses, mules and camels, carts could only travel at 3.2 kilometres per hour (2 mph) and be worked for 5 hours per day, whereas pack horses could travel at 6.4 kilometres per hour (4 mph) and be worked for 8 hours per day. Carts were also liable to break down, especially in rough country. Some were necessary, however, for the carriage of heavy siege machinery.
In the imperial Roman army, each eight-man contubernium (squad) had a mule to carry the leather or goatskin tent large enough to accommodate the squad, a handmill to grind grain – as that part of the ration was issued unground – and tools and cooking implements. Together with five days' rations, this weighed about 200 kilograms (440 lb), which was easily within the carrying capacity of eight men and a mule. Adding a second mule would allow the contubernium to carry an additional 11 to 13 days' rations. The Roman army ration included bread or biscuit, beef and veal, pork and sucking-pig, mutton and lamb, poultry, lentils, cheese, olive oil, wine or vinegar, and salt. This gave them about 3,400 calories (14,000 kJ) per day, which was similar to that of Alexander's men. An army of 60,000 required 95,000 litres (21,000 imp gal) of water for the men and 720,000 litres (158,000 imp gal) for the animals each day. Each contubernium had its own fire to cook its meals, so firewood had to be collected; Julius Caesar regarded a shortage of firewood to be as dangerous as one of water or fodder. The Olmecs used comales to prepare tortillas that could be retoasted and consumed en route, whereas the Maya lacked a good, transportable food, which made long-distance forays difficult.
The Romans constructed a network of roads to permit the rapid movement of wheeled vehicles. A road network was in existence in Italy as early as the third century BCE, and by the time of Diocletian the Roman Empire had 90,000 kilometres (56,000 mi) of roads. The Roman army had no specialised engineering units, and roads were normally built by local communities, but the army could and did construct roads, especially near the frontiers. Roads were not necessary for the movement of troops, since the soldiers and their pack animals could travel along unimproved dirt tracks, but roads were used by supply trains and a military mail system. The Chinese also built a road network, as did the Maurya Empire in India, the Persians in Asia Minor, and the Moche in South America.
However, it was less expensive to ship a tonne of grain from Egypt to Rome by sea than to move it 80 kilometres (50 mi) by road. The Romans preferred to use sea travel when they could, but it was risky as ships could be lost in storms. In his treatise on The Art of Commanding Armies, Polybius recommended that a commander have a thorough knowledge of how far ships could travel by day and night, and the optimal time and seasons for sea travel. Most ships were small. Six months' supply of grain for an army of 40,000 would have weighed 6,320 tonnes, and could have been carried in 200 ships.
Middle Ages: One of the most significant changes in military organisation in Europe after the fall of the Roman Empire in the fifth century was the shift from a centrally organised army to a combination of military forces made up of local troops, who often worked within the household during peacetime and were provided with food and drink by the high officials of the house. The magnates drew upon their own resources for their men, and during Charlemagne's reign and the reign of the Ottonian dynasty in Germany, some heads of house built permanent storehouses and dwellings to house men or supplies. Feudalism, under which a warrior nobility owed military obligations to their overlords, was a form of distributed military logistics system made necessary by poor communications and inadequate monetisation. In Anglo-Saxon England, King Ine of Wessex established a form of tax in kind known as the feorm, which allowed troops to be supported without cash purchases.
While on campaign, soldiers in the medieval period (the fifth to fifteenth centuries) in Europe were often responsible for supplying themselves, either through foraging, looting (more common during sieges), or purchases from markets along the campaign route. Even so, military commanders often provided their troops with food and supplies. This might be in lieu of wages if they worked within the king's household, but soldiers would be expected to pay for it from their wages if they did not, at cost or even with a profit.
Some early governments, such as the Carolingians in the eighth century, required soldiers to supply their own food for three months, but would feed soldiers for free thereafter if the campaign or siege was ongoing. Later, during the Saxon revolt of 1077–1088, Saxon soldiers were required to bring enough supplies for the entire campaign. Some individual feats of logistics were formidable; after a seven-week campaign, English archers shot up to half a million arrows during the Battle of Crécy in 1346.
Soldiers were often required to come equipped for campaign with their own armour, shields and weapons. They could often obtain the needed supplies from local craftsmen: smiths, carpenters, and leather workers often supplied the local militia troops with cooking utensils, bows and arrows, and horseshoes and saddles. Archaeologists have found evidence of goods production in excavations of royal houses, suggesting that the Roman infrastructure of central arms and equipment factories was inherited, even if such factories were more decentralised. Estates during Charlemagne's reign were required to have carpenters staffed to produce weapons and armour.
The Vikings focused on seizing sites like monasteries that had large stores of supplies such as grain, cheese, livestock, beer and wine. They were also often located in the heart of agricultural areas with large surpluses stored in warehouses and granaries. This simplified pillaging and foraging. They were also filled with valuable objects, and housed wealthy persons who could be ransomed for substantial sums. However, they still had to take some supplies with them, and their longships were not suited to this, so they also brought merchant ships (knerrir) to carry supplies with them (and to take plunder back). They established bases where supplies could be stored, which allowed them to occasionally field substantial forces and carry out large-scale operations, such as in the Siege of Paris in 885–886.
The Mongols drank horses' blood and milk, and took with them other livestock such as sheep, goats, cattle and sometimes camels. Sheep were the most important of the herd animals, and butter and cheese was produced from their milk, although horse meat was a particular favourite. Livestock could be spared for slaughter only occasionally, but when it was, all parts of the animal were eaten, and the bones were saved to make broth. They supplemented their diet with wild game, and collected various wild vegetables, fruits, berries, fungi and edible seeds. They had collapsible tents that could be quickly erected and struck. They were capable of operating in winter, but depended on their horses, so they needed grasslands where the horses could graze.
Beasts of burden were used as vehicular transport for the food and supplies, either by carrying the supplies directly on their backs—the average medieval horse and mule could carry roughly 100 kilograms—or by pulling carts or wagons, depending on the weather conditions. A force with 1,000 pack and draft animals required roughly 9,000 kilograms of food for the animals, of which 4,000 kilograms was grain. Other animals had similar needs; donkeys each required about five kilograms of food each day, of which one kilogram had to be grain, while camels required approximately twelve kilograms of food each day, of which five kilograms needed to be grain. Horses were not usually used as draft animals in China or India. In India, oxen were used to carry supplies purchased from the banjaras, mobile merchants who often accompanied armies. Oxen required no grain, but needed 20 kilograms of fodder per day, which could be found by grazing, should time and conditions permit. In the Middle East and Central Asia, camels were often used, and in South and South East Asia elephants were used where roads and navigable rivers were uncommon but water and foliage were plentiful. This was more difficult in sub-Saharan Africa, where the elephants were less amenable. A herd of 1,000 cattle could feed 14,000 or so men for roughly ten days.
Commanders also made use of water transport throughout the medieval period as it was more efficient than ground transport. Ships made transporting supplies, and often soldiers, easier and more reliable, but the ability to use water transport was limited by location, weather, and the availability of ships. Cargo ships were also used, and were most commonly of the Nordic-type, the Utrecht-type, or the proto-cog craft. River boats resembling simple log-boats were also used. In Sub-Saharan Africa, where there were many lakes, canoes were used. Supply by sea was more economical, but not necessarily simpler than supply by land, due to complicating factors like loading and unloading, stowage, and moving supplies to an army that may not be on the coast.
In Mesoamerica, there were no wheeled vehicles or draft animals that could be used as beasts of burden. The army of the Aztec Empire consisted of units of 8,000 men called xiquipilli. The army was accompanied by porters who carried about 23 kilograms each. It moved slowly, at about 2.4 kilometres per hour or 19 kilometres per day. Since the Aztecs did not build roads outside the major cities, the army moved along tracks used for local trade. Due to the limitations of the tracks, each xiquipilli departed on a different day, and used a different route if possible. Since the army could carry food for no more than eight days, this gave it a combat radius of about 58 kilometres (36 mi) in hostile territory; moving through its own territory the army drew on supplies from tributary towns along the way.
Early modern:
Sixteenth century: Between 1530 and 1710, the size of the armed forces deployed by European states increased by an order of magnitude, to 100,000 or more in some cases, resulting in a corresponding increase in the numbers involved in major battles. There were technical and tactical components to this, like the shift from expensive armoured knights to cheaper pikemen, who could be mobilised in vast numbers, but the major factor was the growth of the European state. Increases in population and wealth generated more revenue through taxation, which could be utilised more effectively due to a series of administrative reforms in the sixteenth century. States now had the means to fund the upkeep and development of roads, which aided the logistical support of forces.
This increase in size came not just in the number of actual soldiers but also in the number of camp followers, or tross — anywhere from half to one and a half times the size of the army itself — and in the size of the baggage train, which averaged one wagon for every fifteen men. However, little state support was provided to these massive armies, the vast majority of which consisted of mercenaries. Beyond being paid for their service by the state (an act which bankrupted even the Spanish Empire on several occasions), these soldiers and their commanders were forced to provide everything for themselves. If an army was permanently assigned to a town or city with a working marketplace, or was travelling along a well-established military route, supplies could be bought locally, with intendants overseeing the exchanges. In other cases an army travelling in friendly territory could expect to be followed by sutlers, whose stocks were small and subject to price gouging, or a commissioner could be sent ahead to a town to make arrangements, including billeting if necessary.
Many armies were further restricted to following waterways, as the supplies they were forced to carry could be more easily transported by water. The Russians made use of the Volga River to support the conquest of Kazan in the Russo-Kazan Wars. Artillery in particular was reliant on this method of transport, since even a modest number of cannons of the period required hundreds of horses to move them and their ammunition, and they travelled at half the speed of the rest of the army. Troops moving along the Spanish Road between 1567 and 1620 were able to travel from Milan to Brussels, a distance of about 1,100 kilometres (700 mi), in five to seven weeks. If an army marched at a leisurely pace of 10 to 13 kilometres (6 to 8 mi) per day, the heavy guns could keep up with little difficulty. Improvements in metal casting techniques and the use of copper-based alloys like bronze and brass made cannons lighter and more durable, and therefore more mobile, but their production and maintenance required skilled craftsmen.
The Ottoman Empire developed a formidable logistical system. The network of Roman and Byzantine roads radiating from Constantinople provided good lines of communication, as did the Danube River, via the Black Sea and the port of Varna. Ottoman troops could march the 970 kilometres (600 mi) from Constantinople to Buda via Adrianople and Belgrade in six weeks, drawing provisions en route from forty depots. They were fed biscuit, which did not require grinding like grain and was less likely to spoil in wet weather than flour; this was supplemented by regular issues of mutton. During the siege of Vienna in 1529, heavy rains caused flooding and rendered the roads impassable to the Turks' heavy cannons, and in the Long Turkish War of 1593 to 1606 the Turkish forces in Transylvania were hampered by attacks on their supply ships on the Danube and Tisza Rivers.
Seventeenth century: By the mid-seventeenth century, the French under Secretary of State for War Michel Le Tellier began a series of military reforms to address some of the issues which had plagued armies. Besides ensuring that soldiers were more regularly paid and combating the corruption and inefficiency of private contractors, he devised formulae to calculate the supplies required for a given campaign, created standardised contracts for dealing with commercial suppliers, and formed a permanent vehicle park manned by specialists whose role was to carry a few days' supplies while accompanying the army during campaigns. With these arrangements there was a gradual increase in the use of magazines, which provided a more regular flow of supply via convoys. While the concepts of magazines and convoys were not new, prior to the increase in army sizes there had rarely been cause to implement them.
Le Tellier's son, Louvois, continued the reforms after assuming his position. The most important of these reforms was to guarantee free daily rations for the soldiers, amounting to two pounds of bread or hardtack a day. These rations were supplemented as circumstances allowed by a source of protein such as meat or beans; soldiers were still responsible for purchasing these items out of pocket, but they were often available at below-market prices or even free, at the expense of the state. Louvois also made permanent a system of magazines that were overseen by local governors to ensure they were fully stocked. Some of these magazines were dedicated to providing frontier towns and fortresses with several months' worth of supplies in the event of a siege, while the rest supported French armies operating in the field.
When operating in enemy territory an army was forced to plunder the local countryside for supplies, a historical tradition meant to allow war to be conducted at the enemy's expense. However, with the increase in army sizes this reliance on plunder became a major problem, as decisions regarding where and when an army could move or fight were made based not on strategic objectives but whether a given area was capable of supporting the soldiers' needs. Sieges in particular were affected by this, both for an army attempting to lay siege to a location and for one coming to its relief. Unless a commander was able to implement some sort of regular resupply, a fortress or town with a devastated countryside could be effectively immune to either operation. Mons could not be besieged in 1684 because of a lack of forage in the area. For the later French siege of Mons in the Spanish Netherlands in 1691, during the Nine Years' War, Louvois purchased 900,000 rations of fodder the year before.
Although living off the land theoretically granted armies freedom of movement, it required careful planning, and the need for plunder precluded any sort of sustained, purposeful advance. Bread was a particular problem, as providing it locally was limited by the availability of mills, ovens and bakers. An army of 60,000 might require 90,000 rations once camp followers were included, and at 0.68 kilograms (1.5 lb) of bread per ration that would require 61 tonnes (135,000 lb) of bread per day. Armies normally marched for three days and rested on the fourth. A supply of bread for 60,000 men for four days required at least sixty ovens operated by 240 bakers. To build an oven required 500 two-kilogram bricks, so sixty ovens required sixty cartloads of bricks. In addition, a month's supply of fuel for the sixty ovens needed 1,400 cartloads. Local mills were targets for enemy action, so handmills were often necessary.
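To make the scale of the problem explicit, the figures above can be restated as a short worked calculation (a restatement of the paragraph's own numbers, not additional data):

\[ 90{,}000 \ \text{rations/day} \times 0.68 \ \text{kg/ration} = 61{,}200 \ \text{kg} \approx 61 \ \text{tonnes of bread per day} \]

\[ 60 \ \text{ovens} \times 500 \ \text{bricks/oven} \times 2 \ \text{kg/brick} = 60{,}000 \ \text{kg of bricks, i.e. the sixty cartloads (one per oven) noted above} \]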
Recourse therefore had to be made to bringing up supplies from the bases. Fortresses not only guarded lines of communication but also served as supply bases. In 1675, a French army 80,000 strong was supported for two months by the grain stored at Maastricht and Liège. The indecisiveness of the campaigns of the period was largely the result of the difficulty involved in supplying large armies. The larger armies of the seventeenth century also saw the advent of military uniforms, which were introduced in Britain with the New Model Army in the English Civil War. Clothing contracts became centralised, but funds were disbursed through regiments, which developed distinctive dress. Government payments were often in arrears, sometimes by years, and stripping the dead of their clothing became a common practice.
Eighteenth century: In 1704, the Duke of Marlborough marched his army from the Netherlands to the Danube, following the Rhine and Neckar rivers. He was able to do so because he was moving through rich country and his Quartermaster General, Colonel William Cadogan, paid for supplies in gold at fair prices, so that the local population were willing to sell, and brought supplies to collection points. This was arranged through a contract let to Sir Solomon de Medina to purchase supplies through local agents. The 400-kilometre (250 mi) march wore out boots but these too were provided. The result was that the army arrived in good condition and ready to fight the Battle of Blenheim.
In contrast, Marlborough's opponent, Marshal Tallard, was placed at a logistical disadvantage, having to advance without prepositioned supplies. Usually a population regarded the presence of an army, whether friendly or not, as a disaster and hoped that it would go away as soon as possible. Europe lacked a network of good roads, and rain or snowmelt could turn unmade roads into quagmires. Bridges were infrequent and wooden bridges were easy to destroy, so most rivers could be crossed only by ferry, and rivers could become unnavigable if the water level rose or fell too much.
The Chinese likewise were able to tap into the rich agricultural resources of eastern China to support campaigns against far-flung adversaries. The Kangxi Emperor drove the Russians from the Amur river region, and besieged the Russian fortress at Albazin. The Treaty of Nerchinsk allocated the region to China. In the Dzungar–Qing Wars, the emperor was able to mount an expedition across the Gobi Desert to defeat the Dzungar in the Battle of Jao Modo, but subsequent expeditions to Tibet in 1717 and 1718 were frustrated by logistical difficulties and ran out of food before a more successful expedition in 1720.
In the American Revolutionary War, the Americans had a young population with large numbers of potential soldiers, and an agricultural economy with surplus foodstuffs and no vital centres. Clothing and footwear could be supplied by domestic production, there was widespread ownership of firearms, and a shipping industry experienced in smuggling that could supply other needs. What they lacked was land transportation infrastructure — roads, waterways, wagons, animals and skilled personnel — needed for the distribution of supplies. This hampered the creation and maintenance of forces sufficiently large to drive out the British.
After the war the British created the infrastructure and gained the experience needed to manage an empire. They reorganised the management of the supply of military food and transport, a process completed in 1793–1794 when the naval Victualling and Transport Boards undertook those responsibilities. The reorganisation built on experience gained from supplying the very-long-distance Falkland Islands garrison from 1767 to 1772, and systematised shipments to distant places such as Australia, Nova Scotia, and Sierra Leone.
This new infrastructure allowed Britain to launch large expeditions to the continent during the Revolutionary and Napoleonic Wars and to develop a global network of garrisons in the colonies. They were not always successful; British setbacks in the Kandyan Wars in Sri Lanka were partly attributable to logistical difficulties, although disease and terrain were also factors, and the British were defeated by the Ashanti Empire in the Battle of Nsamankow in 1824 when they ran out of ammunition.
Nineteenth century:
Napoleonic wars: Napoleon made logistical operations a major part of French strategy. He dispersed his corps along a broad front to maximise the area from which supplies could be drawn. Each day forage parties brought in supplies. This differed from earlier operations living off the land in the size of the forces involved, and because the primary motivation was the emperor's desire for mobility. Crucially, the army did not degenerate into an armed mob. Ammunition could not as a rule be obtained locally, so Napoleon allotted 2,500 of his 4,500 wagons to carrying artillery ammunition, with the rest hauling rations. Each man carried 60 to 80 rounds in his pack, and each division carried 97,000 rounds in reserve. Thus, like earlier armies, the Grande Armée took with it sufficient ammunition for the whole campaign. Support troops accompanied each unit. A British Royal Horse Artillery troop in 1813 was authorised to have a farrier, a carriage smith, two shoeing smiths, two collar makers and a wheelwright.
During the Ulm Campaign in 1805, the French army of 200,000 men had no need for time-consuming efforts to scour the countryside for supplies and live off the land, as it was well provided for by France's German allies. France's ally, the Electorate of Bavaria, turned the city of Augsburg into a gigantic supply centre, allowing the Grande Armée, generously replenished with food, shoes and ammunition, to quickly invade Austria after the decisive French victory at Ulm.
Napoleon left nothing to chance, requesting the Bavarians to prepare in advance a specified amount of food at certain cities such as Würzburg and Ulm, for which the French reimbursed them. When French demands proved excessive for the German principalities, the French army used a system of vouchers to requisition supplies and keep the rapid advance going. The agreements with allies permitted the French to obtain huge quantities of supplies at a few days' notice. Napoleon built up a major supply magazine at Passau, with barges transporting supplies down the Danube to Vienna to keep the French army in combat readiness prior to the Battle of Austerlitz.
The French system fared poorly in the Peninsular War in the face of Spanish guerrilla warfare that targeted their supply lines and the British blockade of French-occupied ports on the Iberian Peninsula. The need to supply a besieged Barcelona made it impossible to control the province and ended French plans to incorporate Catalonia into Napoleon's Empire. Wellington blocked the French advance into Portugal with a series of fortifications, the Lines of Torres Vedras, and devastated the area north of the lines to make it difficult for the French to mass forces there to assault or besiege the fortifications.
A more spectacular logistical failure occurred in the Russian campaign in 1812. Carl von Clausewitz noted: "The second crisis most commonly occurs at the end of a victorious campaign when the lines of communication have begun to be overstretched. This is especially true when the war is conducted in an impoverished, thinly populated and possibly hostile country. How vast a difference there is between a supply line stretching from Vilna to Moscow, where every wagon has to be procured by force, and a line from Cologne to Paris, via Liége, Louvain, Brussels, Mons, Valenciennes and Cambrai, where a commercial transaction, a bill of exchange, is enough to produce millions of rations!"
Medical logistics: Disease had been the greatest enemy of the soldier. Invading armies sometimes introduced diseases. Wars often created conditions for diseases to flourish through crowding, social disruption and damage to infrastructure. Crowded army camps were always susceptible to diseases. In the eighteenth century, physicians like George Cleghorn, Richard Brocklesby and René-Nicolas Dufriche Desgenettes called for improvements in military hygiene, as did John Pringle, who wrote a treatise on military medicine in 1752, Observations on the Diseases of the Army in Camp and Garrison, in which he argued that disease was caused by bad air and overcrowding.
James Lind published a Treatise of the Scurvy in 1753 in which he advocated the consumption of fresh fruit and lemon juice to treat scurvy, a common illness among sailors on long voyages. Of the 175,990 sailors recruited by the Royal Navy between 1774 and 1780, 18,545 died of disease, mainly scurvy, and 1,243 were killed. Between 1794 and 1813, with the adoption of a lemon juice ration, the navy's sick rate fell from 1 in 4 to 1 in 10.75 and the death rate from 1 in 86 to 1 in 143. Lind also advocated the consumption of the bark of cinchona trees to prevent malaria, something that had previously been recommended by Thomas Sydenham in 1676. The active ingredient was extracted and isolated in 1820 by Pierre-Joseph Pelletier and Joseph Bienaimé Caventou, who named it "quinine".
The British Walcheren Campaign of 1809 was particularly notable in that fewer than 800 men died in battle, but forty per cent of the force of 40,000 contracted diseases, probably malaria, typhoid or typhus; 60 officers and 3,900 men died, and some 11,000 men were still ill six months later. It is estimated that of the 240,000 British soldiers and sailors who died in all theatres in the Napoleonic wars, fewer than 30,000 died from wounds.
Later nineteenth century: The nineteenth century saw technological developments that facilitated immense improvements to the storage, handling and transportation of supplies. Salting, drying and smoking had long been used to delay food spoilage, but in 1809 Nicolas Appert invented a process of heat sterilisation and airtight bottling for food preservation on an industrial scale. Why it worked would not be explained until Louis Pasteur's groundbreaking research in 1864, but the process was swiftly and widely adopted. Appert used glass because the quality of French tinplate was poor, but good quality tinplate was widely available in the UK. Philippe de Girard in France suggested its use to Peter Durand in England, who took out a patent on the process in 1810, which he sold to industrialist Bryan Donkin in 1812 for £1,000 (equivalent to £84,000 in 2023). The British Admiralty placed substantial orders for meat preserved in tin cans in 1814. Canning remained a manual process for many years until Max Ams invented the double seam for cans in 1896, making it possible to use an automated process to fill and close them. The use of cans simplified storage and distribution of foods, and reduced waste and the incidence of food-related illness.
A practical mechanical refrigeration process was developed in Australia by James Harrison and patented in the UK by him in 1856, and by the 1880s reefer ships were plying the oceans. Richard Trevithick developed the first high-pressure steam engine in 1801 and the first working railway steam locomotive in 1804. Steam power had great advantages for vessels that plied rivers, where twists and turns meant changes of course but the narrow confines of the river made it difficult to tack. Wood and coal could be obtained along the river, whereas ocean-going vessels had no such opportunity, and therefore continued to carry sails even when they had engines.
By reducing the dependence on the wind, the steam engine made shipping faster and more reliable. To allow their warships to operate around the world, the British built a global network of coaling stations. To reduce its dependence on British colliers, the United States Navy began to move to oil in 1913. For the British, this was a more painful process, as it produced coal but not oil domestically.
The first to realise the potential of rail were the Russians, who moved a force of 14,500 men from Uherské Hradiště to Kraków by rail in 1846. During the American Civil War, railways were used extensively for the transport of personnel, supplies, horses and mules, and artillery pieces. While railways were a more economic form of transport than animal-drawn carts and wagons, they were limited to tracks, and therefore could not support an advancing army unless its advance was along existing railway lines. The large armies of the American Civil War also made great use of riverboats and coastal shipping, which were not so easy to damage or interdict.
During the Austro-Prussian War of 1866, railways enabled the swift mobilisation of the Prussian Army, but the problem of moving supplies from the end of the rail lines to units at the front resulted in nearly 18,000 tons of supplies trapped on trains, unable to be unloaded to ground transport. During the Crimean War, the British built the first military railway, one specifically for supporting armies in the field, to support the siege of Sevastopol. The Prussian use of railways during the Franco-Prussian War is often cited as an example of logistical modernisation, but the advantages of manoeuvre were often gained by abandoning supply lines that became hopelessly congested with rear-area traffic. The Canadian government moved 4,000 troops and their supplies over the Canadian Pacific Railway to suppress the North-West Rebellion in 1885, and the Russians moved 370,000 troops along the incomplete Trans-Siberian Railway for the Russo-Japanese War in 1904.
Twentieth and twenty-first centuries:
First World War: Between 1870 and 1914, the population of Europe grew from 293 million to 490 million. The expansion of armies and navies was even more rapid. With the spread of military conscription and reserve systems in the decades leading up to the 20th century, the potential size of armies increased substantially. France mobilised 570,000 troops for the Franco-Prussian War and over three million on the outbreak of the First World War. The advent of industrial warfare in the form of bolt-action rifles, machine guns and quick-firing artillery sent ammunition consumption soaring. In the Franco-Prussian War, each German gun fired 199 shells on average, but in 1914 the German stock of 1,000 rounds per gun was exhausted in the first month and a half of fighting.
In earlier wars, most artillery pieces lasted for the duration of the campaign, but now counter-battery fire was capable of destroying them. Strenuous efforts were made to step up production but constant firing led to wear and tear on the guns. The factories prioritised production of new guns over spare parts, which became scarce. Quality suffered in the haste to produce more and there were serious problems with guns and ammunition. In 1915, as many as 25 per cent of the rounds in a batch might be defective. The shortage of ammunition created a political crisis in the UK, the Shell Crisis of 1915, which led to the formation of a new coalition government.
As munitions production increased, transport became the major bottleneck. Military logistical systems continued to rely on nineteenth-century technology. The British shipped 5,337,841 tonnes (5,253,538 long tons) of ammunition to France and 5,525,875 tonnes (5,438,602 long tons) of hay and oats to feed the animals. When the war began, the rail and horse-drawn supply systems were stretched to their limits. Where the stalemate of trench warfare took hold, narrow gauge trench railways were built to extend the rail network to the front lines. The great size of the German Army proved too much for its railways to support except while immobile. From the beginning of the Battle of the Somme on 24 June to 23 July 1916, 150,000 tonnes (148,000 long tons) of ammunition had been fired but only 103,404 tonnes (101,771 long tons) were landed, the difference being made up by depleting stockpiles. The capacity of the six Channel ports that handled 96 per cent of the British Expeditionary Force's requirements was increased, and additional locomotives and rolling stock were imported. Between 1914 and 1918, the French laid between 5,000 and 6,000 kilometres of new track.
On the Western Front, supplies moved from the ports by rail or barge to regulating points where they were sorted before being forwarded. The supply system might be described as "semiautomatic". Certain supplies for which demand was invariant, such as fodder and rations, were sent daily without requisition in division-sized "packs" consisting of two wagons of bread, two of groceries, one of meat, four of hay, five of oats and one of petrol, a total of 15 wagons. Each pack was earmarked for a particular division and would be delivered to its own railhead. Supplies for which there was variable demand, such as reinforcements, remounts, ammunition and engineering stores, had to be indented, and were sent by the railway carload. A typical train would consist of forty wagons, two packs and ten other wagons. Each division drew its supplies from one railhead, although it might share it with other divisions.
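A rough way to picture the "pack" system is as a fixed bill of lading per division per day. The sketch below is purely illustrative (the dictionary and its names are not drawn from any official document); it only confirms that the wagon counts quoted above are internally consistent: one pack comes to fifteen wagons, and a typical train of two packs plus ten indented wagons comes to forty.

```python
# Illustrative model of the daily division "pack" described above.
# Wagon counts are taken from the text; the dictionary itself is hypothetical.
PACK_WAGONS = {
    "bread": 2,
    "groceries": 2,
    "meat": 1,
    "hay": 4,
    "oats": 5,
    "petrol": 1,
}

pack_size = sum(PACK_WAGONS.values())
assert pack_size == 15              # one pack = fifteen wagons

# A "typical train": two packs plus ten wagons of indented (variable-demand) stores.
train_size = 2 * pack_size + 10
assert train_size == 40
print(pack_size, train_size)
```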
The advent of motor vehicles powered by internal combustion engines offered an alternative to animal transport for moving supplies forward of the railhead. Though they generally required better roads and bridges, they were much faster and more efficient than animal transport. Compared with railways they had limited cargo capacity, and they created logistical problems of their own with their need for fuel and spare parts. At one point the French used 11,200 trucks to move 100,000 men over 100 miles (160 km) at short notice. By 1918, the French had 90,000 motor vehicles, while the Germans had 40,000.
The movement of supplies posed greater problems on the Eastern Front, where the transportation system was less developed than in the west. The Russian economy was less developed and less efficient than that of Germany, and food and ammunition shortages developed in 1915. In turn, Russia was more developed industrially than Turkey, which nonetheless managed to last longer than Russia. This was partly because, after the Gallipoli campaign, the British fought the Sinai and Palestine and Mesopotamian campaigns in relatively remote areas of the Turkish Empire, where even well-resourced British forces had to overcome serious supply and transportation problems to bring their power to bear. The British were able to use motor vehicles in the invasion of Darfur, but in sub-Saharan Africa they were heavily reliant on human porters.
The British blockade of Germany kept a stranglehold on the raw materials, goods and food needed to support the German war effort, and is considered one of the key elements in the eventual Allied victory in the war. This form of economic warfare also involved pressure on neutral countries not to export or re-export to Germany, and a programme of pre-emptive purchasing. The Germans attempted to exploit occupied countries like Romania and Ukraine for oil, grain and other resources. Although the Allies controlled most of the world's shipping, Germany's unrestricted submarine warfare showed the vulnerability of merchant shipping despite Allied naval superiority. The United Kingdom was particularly vulnerable to economic blockade, as it did not produce enough food to feed itself, importing nearly two-thirds of its food. Coal and food had to be rationed.
In 1912, the biochemist Casimir Funk theorised that beriberi, scurvy and rickets were all diseases caused by nutrient deficiencies, naming the missing nutrient chemicals "vitamines"; over the following decades, biochemists were able to isolate them. During the First World War, the troops in the Gallipoli campaign suffered from beriberi and scurvy because the British Army's ration was deficient in these vitamins. Unlike their counterparts on the Western Front, the troops were unable to supplement their diet with local produce. A type of scurvy in the form of septic sores known as "Barcoo rot" appeared among the Australian Light Horse in the Sinai campaign, for the same reason.
Second World War: The mechanisation of warfare that started in the First World War added the maintenance needs of military aircraft, tanks and other combat vehicles to the burden on military logistics. Many nations, including Germany, continued to rely on horse-drawn transport. Trucks were expensive to produce, and their production put additional strain on scarce resources such as rubber, steel and petroleum. Petroleum was a particular problem, as the world's major sources were under the control of the Netherlands, the United Kingdom, the United States and the Soviet Union. Efforts were made to step up the production of synthetic fuels and rubber, but their supply posed difficulties throughout the war, and the production plants came under Allied air attack. Germany's motor vehicle industry was not well developed either, so it made sense to continue to rely on horse-drawn transport. In 1939 a German infantry division had 942 motor vehicles and 1,200 horse-drawn carts. Even this establishment was hard to meet, and large numbers of civilian and captured British and French vehicles were employed. The multiplicity of types created problems with spare parts.
The forces of the United States and United Kingdom were fully mechanised, although the British and Americans used mules in North Africa, Italy and Burma. The British and Japanese also used elephants in Burma. In the South West Pacific, human porters were used. There was little civilian demand for four-wheel drive vehicles, which were more expensive than regular vehicles, so commercial firms saw little benefit in producing them. All armies entered the war with large numbers of two-wheel drive vehicles. The need for four-wheel drive soon became apparent, especially in the less developed parts of the world, and considerable manpower and materiel had to be devoted to road making and maintenance. Similarly, the American automotive industry had scant interest in heavy trucks for long-distance hauls; the US Interstate Highway System had not yet been built, and interstate commerce was the province of rail and water transport. The US Army gave a low priority to such vehicles until the need became acute.
The increased technological and administrative complexity was reflected in the proliferation of staff and paperwork. In the United States, the Army Service Forces inventoried 200,000 paper forms and eliminated 125,000 of them. Professional analysis and simplification of common procedures was undertaken using industrial engineering techniques developed by industry.
The German invasion of the Soviet Union in 1941 faced logistical failure when the Soviet Union did not collapse after the initial frontier battles. The summer invasion meant that fodder was available for the 625,000 horses amassed for the operation, but stocks of food were low, and their seizure alienated the local population. An invasion later in the year would have avoided this, but left less time for operations before the winter set in. The distances involved, the speed of the advance, and the poor road network all contributed to the logistical difficulties, and shortages of spare parts developed for motor vehicles, which were in short supply in the first place. The bridges over the Dnieper were demolished by the retreating Soviets, and use of the railway system was hampered by the different track gauge used in the Soviet Union. Transportation difficulties made it difficult to distribute stores like winter clothing. In 1942 the German forces in the Soviet Union began to integrate the materiel and manpower resources of the occupied regions into the German war effort.
Motor vehicles ran on tyres, but the supply of rubber to the Allies of World War II was curtailed when the Japanese overran the major sources of natural rubber. Imports to the United States dropped from 910,000 tonnes (900,000 long tons) in 1941 to 11,000 tonnes (11,000 long tons) in 1942. Fuel rationing and recycling measures were introduced to conserve tyres. The synthetic rubber industry in the United States grew from producing 8,400 tonnes (8,300 long tons) in 1939 to 810,000 tonnes (800,000 long tons) in 1944. Germany produced synthetic rubber and oil, much of it with slave labour at the IG Farben plant at Auschwitz.
The Japanese also captured the major sources of quinine. Malaria was a major medical and military problem in many theatres of war. The US Marines in the Guadalcanal campaign had 5,000 hospital admissions for malaria among a force of 16,000 after two months on Guadalcanal, while the Australian force at Milne Bay reported over 5,000 cases of malaria among a force of 12,000 in November 1942. Neil Hamilton Fairley persuaded the UK and US authorities to produce atebrin and plasmoquine, antimalarial drugs that had been developed in Germany in the 1920s and 1930s. The development of penicillin by Howard Florey and his team was a significant advance in the treatment of wounds with antibiotics. During the campaign in Western Europe in 1944–1945, penicillin was widely used both to treat infected wounds and as a prophylactic to prevent wounds from becoming infected. Gas gangrene had killed 150 out of every 1,000 casualties in the First World War, but the incidence of the disease now fell almost to zero. Open fractures now had a recovery rate of better than 94 per cent, and recovery from burns of one-fifth of the body or less was 100 per cent.
In the North African campaign, the Italians struggled to supply their forces through the inadequate ports in Libya, while the British had access to the Suez Canal. In the Siege of Tobruk, destroyers were used to resupply the garrison, as freighters were too vulnerable to air attack. At the same time, retention of the port stretched the German and Italian supply lines, making offensive action into Egypt more difficult. Resupplying the garrison of Malta was even more hazardous and required major operations, as did the Arctic convoys that brought aid to the Soviet Union, which were so dangerous that they had to be suspended in July and August 1942. Safer routes were developed through Iran and Siberia, and through the Black Sea after it was reopened in 1945.
The North African campaign saw the widespread adoption of the 20 litre jerry can, a German invention that was copied by the British and Americans. The jerry can had convenient carrying handles and could be stacked. It did not shift or roll in storage, and floated in water when filled with petrol. The British version was an exact copy of the German model; the American version, called an Ameri-can by the British, was slightly smaller, with a screw cap onto which a nozzle could be fitted. It weighed 4.5 kilograms (10 lb) empty, and 18 kilograms (40 lb) when filled with petrol, so 56 filled cans weighed 1.0 tonne (1 long ton). Some 11.5 million jerry cans were provided for Operation Overlord. Of these, 10.5 million were manufactured in the UK and supplied to the US Army under Reverse Lend-Lease, while the rest came from the US.
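The jerry can weights quoted above also lend themselves to a quick back-of-the-envelope check. The snippet below is illustrative only: the petrol-per-can figure and the notional total for the 11.5 million Overlord cans are derived from the stated weights, not figures from the text.

```python
# Back-of-the-envelope arithmetic for the jerry can figures quoted above.
empty_kg, full_kg = 4.5, 18.0
petrol_per_can_kg = full_kg - empty_kg              # ~13.5 kg of petrol per can

print(f"56 filled cans: {56 * full_kg / 1000:.2f} t")        # ~1.0 t, as stated

# Illustrative only: if every one of the 11.5 million Overlord cans were filled once.
overlord_cans = 11_500_000
print(f"Notional capacity: {overlord_cans * petrol_per_can_kg / 1000:,.0f} t of petrol")
```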
To facilitate amphibious operations in Europe and the Pacific, the Allies developed an assortment of special vessels. There were attack transports (APA) and amphibious cargo ships (AKA), and ocean-going landing ships, most notably the landing craft, infantry (LCI), the landing ship, tank (LST), which could carry tanks and trucks and land them on a beach, and the landing ship, dock (LSD), a floating dry dock that could transport landing craft and amphibious vehicles. Smaller landing craft came in many forms, from the landing craft, vehicle, personnel (LCVP), to the larger landing craft, mechanised (LCM) and landing craft, tank (LCT). Amphibious vehicles included the DUKW, an amphibious truck, and the Landing Vehicle Tracked (LVT).
The development of amphibious craft allowed the Allies to land in Normandy without having to quickly seize a heavily defended port. After their victory in the Battle of Normandy, the advance came to a halt in September 1944. This was not the result of inadequate supplies or port capacity – there were still some 600,000 long tons (610,000 t) of supplies stockpiled in the Normandy lodgment area in November – nor solely of a shortage of fuel. Rather, the problem was the inability to deliver fuel and supplies to the armies. Railways could not be repaired and pipelines could not be constructed quickly enough. Motor transport was used as a stopgap, but insufficient numbers of heavy trucks compelled the Army to use the smaller general purpose 2½-ton 6×6 trucks for long hauls, for which they were unsuited. The Red Ball Express was a success, but at a cost: overloading, careless driving, lack of proper vehicle maintenance, and wear and tear took their toll on the truck fleet. In the long run it was the railways that carried most of the tonnage.
Between September and November 1944, the American forces in the European Theater of Operations (ETO) were beset by severe difficulties with port discharge capacity and inland transportation infrastructure that only eased with the opening of the port of Antwerp in November. The Germans strongly defended the ports and destroyed their facilities. The shipping crisis in Europe escalated into a global one. The Allied merchant fleet was still growing at a rate of 510,000 deadweight tonnes (500,000 deadweight tons) per month, but the number of ships available for loading at US ports was shrinking owing to the retention of vessels by the theatres. The retained shipping amounted to 7,100,000 deadweight tonnes (7,000,000 deadweight tons), about 30 per cent of the total Allied-controlled tonnage. When ships failed to return from overseas on time, supplies piled up at the ports, depots and railway sidings in the United States.
Another wartime development was air transport, which provided an alternative to land and sea transport, but with limited tonnage and at high cost. The Germans used air transport to reinforce Tunisia after the Allies landed in North West Africa. Soon after, the Germans attempted to supply the surrounded Sixth Army during the Battle of Stalingrad, but failed because there were insufficient aircraft to fulfil the mission. The Allies were more successful; in the Burma Campaign, aircraft supplied the Chindits and the cut-off Allied units in the Battle of Imphal. An airlift over "the Hump" was used to resupply the Chinese war effort. After the war, the 1948 Berlin Airlift succeeded in supplying the whole non-Soviet half of the city.
Long distances dominated the Pacific War. For the attack on Pearl Harbor, the Japanese used oilers to refuel the attacking fleet at sea en route. The ability of the Japanese navy to conduct refuelling and replenishment at sea allowed it to conduct wide-ranging operations in the Pacific and Indian Oceans in the first months of 1942. In 1944, the United States Navy created service squadrons of support ships to enable the Pacific Fleet to remain at sea longer and support fast-paced operations against a succession of Japanese-held islands.
In November 1943, the Pacific Ocean Areas instituted a form of automatic supply, whereby troops and supplies were sent according to a pre-arranged schedule in a series of echelons. Shipping was held at control points to avoid congestion in forward areas, which also minimised the time when ships were most exposed to enemy attack. While wasteful in some respects, the procedure allowed for mounting of operations from widely scattered ports, avoided shipping congestion and long turnaround times, and eliminated the duplication of Army and Navy supplies. The South West Pacific Area adopted one of its key features, the block loading of ships for a particular destination.
As in Europe, there was a shipping crisis in the South West Pacific in late 1944, and for the same reason: a lack of port capacity. To ease the strain on shipping resources, the American forces made use of local procurement. While American Lend-Lease aid to Australia was only 3.3 per cent of aid to all countries, Australian Reverse Lend-Lease represented 13.0 per cent of aid to the United States. Bypassed Japanese forces in the South West Pacific Area were expected to "wither on the vine" and starve, but this did not occur; they cultivated gardens using local labour and seeds and equipment imported by aircraft and submarines, which also brought in ordnance and medical supplies. They remained strong, well-organised and capable of offensive action. Australian forces conducted a series of offensives against them, targeted at their gardens and supplies.
Post-Second World War: Helicopters were used by the United States in the Korean War to deliver supplies. Although much slower than fixed-wing aircraft, they could deliver supplies in minutes to forward areas over terrain that could take hours to cross overland. While still affected by the weather, they could fly when other aircraft were grounded. They also became important for the rapid evacuation of casualties. They were used by the French in the First Indochina War and the Algerian War, where they handled most of the tactical troop movement and casualty evacuation, and much of the logistical support.
In the Vietnam War, the U.S. Army operated a fleet of large Boeing CH-47 Chinook and Sikorsky CH-54 Tarhe helicopters. Using a technique whereby supplies in a cargo net were slung under a helicopter, a CH-47 could move a hundred tonnes of supplies within a 16-kilometre (10 mi) radius in a single day. These helicopters were also used to recover 10,000 crashed aircraft. Notably, in these conflicts victory did not always go to the side with the best logistics.
The war in Vietnam also saw the large-scale employment of containerisation. A standard steel container was designed called the Conex box that was capable of holding 4,100 kilograms (9,000 lb) and suitable for loading onto a semi-trailer or railway flat car. Eventually 150,000 of them were sent to Vietnam. The use of containers reduced port congestion and handling time, and saved money on packaging. There was less damage to cargo in transit and reduced loss through pilferage. The containers could be used in lieu of covered storage. The drawback of using containers was that they required special equipment to handle them. This became less of a problem as containerisation spread through the world, but in 1999 the International Force East Timor (INTERFET) found that East Timor had no facilities for handling containers, and special container handling cranes had to be designed and manufactured in New Zealand.
The development of large cargo-carrying aircraft enhanced the ability of airlift to move personnel and supplies over long distances. It remains uneconomical compared with sealift, so sealift is still the preferred means of transport for cargo, particularly heavy and bulky items. Nonetheless, during the Yom Kippur War, as part of Operation Nickel Grass, American Lockheed C-141 Starlifter and Lockheed C-5 Galaxy aircraft delivered 22,683 tonnes (22,325 long tons) of supplies for Israeli forces, including 29 tanks, of which only four arrived before the ceasefire on 22 October 1973; just 39 per cent of the Nickel Grass materiel was delivered by then. Another 22,683 tonnes (22,325 long tons) was sent by sea, none of which arrived before the ceasefire.
During the initial stage of the Gulf War, from 7 August to 8 November 1990, some 187,000 U.S. troops were deployed to Saudi Arabia, 99.22 per cent of them by air. Airlift also accounted for 15.3 per cent of the cargo, some 161,804 tonnes (178,358 short tons). The second phase of deployment, from 8 November 1990 to 16 January 1991, involved the movement of 391,604 troops by air, the majority of whom travelled on commercial flights, and 326,223 tonnes (359,599 short tons) of cargo, representing 14.5 per cent of the total. The seaports of Saudi Arabia were world class, much better than their counterparts in the United States, with 60 piers, of which the U.S. forces used 15. Sealift carried 85.5 per cent of the dry cargo and 1,869,990 tonnes (2,061,310 short tons) of petroleum products in this phase.
As the twentieth century drew to a close, the increasing complexity of new weapons systems became a concern. While new technologies were intended to make armies more lethal and less reliant on manpower, they did not always live up to their promise. In the 1982 Falklands War, the logistical implications of the Rapier missile launchers were not initially appreciated. Generally located on hilltops where there were no roads or tracks, they had to be sited by helicopter. If they had to be moved, whether yards or miles, another helicopter sortie was called for. They required fuel to keep their generators running, and their isolated sites required the full-time service of a Westland Sea King helicopter, itself a voracious consumer of fuel, to keep them going.
The increasing complexity of weapons and equipment saw the proportion of personnel devoted to logistics in the US Army rise from 39 per cent in the American Expeditionary Forces in the First World War to 45 per cent in the ETO in the Second World War, but declined to 42 per cent in the Korean War, and 35 per cent in the Vietnam War. Concerns about the low tooth-to-tail ratio saw a mandated ratio put in place, but the widespread use of civilian contractors saw the proportion of people devoted to logistical functions rise to 55 per cent in 2005 during the Iraq War.
Complex systems like the M1 Abrams tank require more knowledge and more skilled personnel to operate, maintain and repair, and resist easy modification. The M1 required three times the fuel of the older M60 tank, and 20 per cent more spare parts. When committed to action in the Gulf War, many Abrams tank crews exhausted their stock of spare parts, which could have become a serious problem had the fighting lasted more than 100 hours. On the other hand, 300,000 rounds of artillery, antiaircraft and tank ammunition were shipped only to be returned, largely owing to the greater lethality of modern weapons lowering ammunition consumption rates. The high fuel usage led to reconsideration of proposals to use a diesel engine instead.
The diversity of equipment, and the consequent large number of spare parts stocked by NATO, saw attempts at standardisation. By the 21st century, there were over 1,000 NATO standardization agreements, covering everything from ammunition calibres to rail gauges and the terminology that troops use to communicate with each other. The adoption of standardisation as policy promised benefits through reducing inventories, allowing alliance partners to draw on each other's stockpiles and repair services, reducing support overheads, and lowering costs through consolidation of research and development and the economies of scale of larger production runs. Most countries had no choice, as they lacked the industry and technology to manufacture complex modern weapons systems. However, the adoption of foreign weapons also meant the adoption of foreign tactics, and giving up the advantages of bespoke systems tailored to the nation's own, often unique, strategic environment.
The management of spare parts became a major concern. When items were produced, it was not known how many of each spare part would be needed. Failure to estimate correctly meant inventories of spare parts that were never needed and shortages of others. Keeping old equipment rather than buying new seemed a sensible option, and sometimes the only one, for many armies, but the cost of keeping old vehicles and equipment running could also become uneconomical if not prohibitive.
Between 1960 and 2010, the number of natural disasters increased from 50 a year to 350 a year. While disaster relief was not their primary role in most countries, national and international military forces were increasingly engaged in such activities, since they possessed the manpower, equipment and organisation to deal with them. Up to 80 per cent of the total spent on disaster relief activities involved logistics operations, of which more than 40 per cent was wasted through duplication, lack of time to carry out adequate planning, and other factors.
Although military logistics was an older discipline than its business counterpart, in the twenty-first century the adoption of new tools, techniques and technologies saw the latter overtake the former. Techniques were imported to military logistics that had been developed in the business world, such as just-in-time manufacturing. This greatly reduced the costs involved in storage and handling of items, but in the combat environment of the Iraq War, the drawbacks became all too clear when suppliers and transport resources could not respond to rapidly changing patterns of demand. Shortages developed, and units responded by reverting to traditional just-in-case logistics, stockpiling items that they thought they might need.
The Russian invasion of Ukraine in 2022 encountered severe logistical difficulties due to poor planning, notably a failure to anticipate the degree of resistance that was actually encountered. The logistical resources required were not on hand even though the capability existed. As equipment broke down through use and battle damage, a shortage of spare parts developed, which was compounded by inadequate numbers of trained maintenance personnel. Although Russia was the world's second largest producer of armaments, its industrial base still struggled to replace materiel losses incurred in high-intensity combat. Even routine sustainment became difficult, with ground transport subject to interdiction by standoff missiles. Strategic failure then followed from logistical failure.
References:
Antiquity: Dalley, Stephanie (2017). "Assyrian Warfare". In Frahm, Eckart (ed.). A Companion to Assyria. John Wiley & Sons. ISBN 978-1-4443-3593-4. OCLC 957184612.
Davies, R. W. (1971). "The Roman Military Diet". Britannia. 2: 122–142. doi:10.2307/525803. ISSN 0068-113X. JSTOR 525803.
Engels, Donald W. (1980). Alexander the Great and the Logistics of the Macedonian Army. Berkeley: University of California Press. ISBN 978-0-520-04272-8. OCLC 12425877.
French, David (1998). "Pre- and Early-Roman Roads of Asia Minor. The Persian Royal Road". Iran. 36: 15–43. doi:10.2307/4299973. ISSN 0578-6967. JSTOR 4299973.
Hassig, Ross (1992). War and Society in Ancient Mesoamerica. Berkeley: University of California Press. ISBN 0-520-07734-2. OCLC 25007991.
Roth, Jonathan P. (1999). The Logistics of the Roman Army at War (264 BC - AD 235). Leiden: Brill. ISBN 978-90-04-11271-1. OCLC 39778767.
Wright, Robert (2001). Nonzero: History, Evolution & Human Cooperation. London: Little, Brown. ISBN 978-0-316-64485-3. OCLC 980591718.
Medieval: Ayton, Andrew (2007) [2005]. "The Battle of Crécy: Context and Significance" (PDF). In Ayton, Andrew & Preston, Philip (eds.). The Battle of Crécy, 1346. Woodbridge, Suffolk: Boydell Press. pp. 1–34. ISBN 978-1-84383-115-0. OCLC 56733244. Archived from the original (PDF) on 5 February 2019.
Bachrach, Bernard S.; Bachrach, David S. (2017). Warfare in Medieval Europe c.400-c.1453. Abingdon, Oxfordshire: Routledge. doi:10.4324/9781003032878-5. ISBN 978-1-003-03287-8. OCLC 1260343133.
Buell, Paul D. (1990). "Palate of the Qan: Changing Foodways of the Imperial Mongols". Mongolian Studies. 13 (The Hangin Memorial Issue): 57–81. ISSN 0190-3667. JSTOR 43193123.
Hardy, Robert (2010) [1976]. Longbow: A Social and Military History (PDF) (reprint of 4th ed.). Yeovil, Somerset: Haynes Publishing. ISBN 978-1-85260-620-6. OCLC 979490727. Archived (PDF) from the original on 6 December 2018. Retrieved 7 May 2019.
Hassig, Ross (1999). "The Aztec World". In Raaflaub, Kurt; Rosenstein, Nathan (eds.). War and Society in the Ancient and Medieval Worlds: Asia, the Mediterranean, Europe, and Mesoamerica. Cambridge, Massachusetts: Harvard University Press. pp. 361–388. ISBN 978-0-674-00659-1. OCLC 41601137.
McMahon, Lucas (2021). "Logistical Modelling of a Sea-Borne expedition in the Mediterranean: The Case of the Byzantine Invasion of Crete in AD 960". Mediterranean Historical Review. 36 (1): 63–94. doi:10.1080/09518967.2021.1900171. ISSN 0951-8967. S2CID 235676141.
Early modern: Chandler, David G. (2004). Blenheim Preparation: The English Army on the March to the Danube. Staplehurst, Kent: Spellmount. ISBN 978-1-873376-95-9. OCLC 56475274.
Duffy, Christopher (1988). The Military Experience in the Age of Reason. New York: Atheneum. ISBN 978-0-689-11993-4. OCLC 18166218.
Lyndon, Brian (Summer 1976). "Military Dress and Uniformity 1680-1720". Journal of the Society for Army Historical Research. 54 (218): 108–120. ISSN 0037-9700. JSTOR 44230315.
Lynn, John A. (1993a). "Food, Funds and Fortresses: Resource Mobilization and Positional Warfare in the Campaigns of Louis XIV". In Lynn, John A. (ed.). Feeding Mars: Logistics in Western Warfare from the Middle Ages to the Present. London and New York: Routledge. pp. 137–160. ISBN 978-0-367-15749-4. OCLC 1303906366.
Lynn, John A. (1993b). "The History of Logistics and Supplying War". In Lynn, John A. (ed.). Feeding Mars: Logistics in Western Warfare from the Middle Ages to the Present. London and New York: Routledge. pp. 9–30. ISBN 978-0-367-15749-4. OCLC 1303906366.
Morriss, Roger (July 2007). "Colonization, Conquest, and the Supply of Food and Transport: The Reorganization of Logistics Management, 1780–1795". War in History. 14 (3): 310–324. doi:10.1177/0968344507078377. ISSN 0968-3445. JSTOR 26070709. S2CID 111273795.
Parker, Geoffrey (June 1976). "The 'Military Revolution,' 1560-1660—a Myth?". The Journal of Modern History. 48 (2): 195–214. ISSN 0022-2801. JSTOR 1879826.
Parker, Geoffrey (1996). The Military Revolution: Military Innovation and the Rise of the West, 1500-1800 (second ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-47426-9. OCLC 32968694.
Shy, John (1993). "Logistical Crisis and the American Revolution: A Hypothesis". In Lynn, John A. (ed.). Feeding Mars: Logistics in Western Warfare from the Middle Ages to the Present. London and New York: Routledge. pp. 161–182. ISBN 978-0-367-15749-4. OCLC 1303906366.
Nineteenth century: Achan, Jane; Talisuna, Ambrose O.; Erhart, Annette; Yeka, Adoki; Tibenderana, James K.; Baliraine, Frederick N.; Rosenthal, Phillip J.; D'Alessandro, Umberto (24 May 2011). "Quinine, an Old Anti-Malarial Drug in a Modern World: Role in the Treatment of Malaria". Malaria Journal. 10: 144. doi:10.1186/1475-2875-10-144. ISSN 1475-2875. PMC 3121651. PMID 21609473.
Baron, Jeremy Hugh (2009). "Sailors' Scurvy Before and After James Lind - A Reassessment". Nutrition Reviews. 67 (6): 315–332. doi:10.1111/j.1753-4887.2009.00205.x. ISSN 0029-6643. PMID 19519673.
Brett-James, A. (1 December 1963). "The Walcheren Failure, Part One". History Today. Vol. 13, no. 12. pp. 811–820. ISSN 0018-2753. Retrieved 6 June 2023.
Brett-James, A. (1 January 1964). "The Walcheren Failure, Part Two". History Today. Vol. 14, no. 1. pp. 60–68. ISSN 0018-2753. Retrieved 6 June 2023.
Clausewitz, Carl von (1989) [1976]. On War. Translated by Howard, Michael; Paret, Peter. Princeton, New Jersey: Princeton University Press. ISBN 978-0-691-05657-9. OCLC 24168145.
Chandler, David G. (1 February 1978). "The Lines of Torres Vedras, 1810-11: Wellington in Portugal". History Today. Vol. 28, no. 2. pp. 126–129. ISSN 0018-2753. ProQuest 1299032096 – via ProQuest.
Crump, Thomas (2007). A Brief History of the Age Of Steam: The Power that Drove the Industrial Revolution. London: Robinson. ISBN 978-1-84529-553-0. OCLC 213605104.
Howard, Martin R. (18–25 December 1999). "Walcheren 1809: A Medical Catastrophe". BMJ: British Medical Journal. 319 (7225): 1642–1645. ISSN 0959-8138. JSTOR 25186694.
Hess, Earl J. (2017). Civil War Logistics: A Study of Military Transportation. Baton Rouge: Louisiana State University. ISBN 978-0-8071-6750-2. OCLC 966593962.
MacArthur, Roderick (Winter 2009). "British Army Establishments During the Napoleonic Wars (Part 2): Cavalry, Artillery, Engineers and Supporting Units". Journal of the Society for Army Historical Research. 87 (352): 331–356. ISSN 0037-9700. JSTOR 44231710.
Morgan, John (January 2009). "War Feeding War? The Impact of Logistics on the Napoleonic Occupation of Catalonia". Journal of Military History. 73 (1): 83–116. doi:10.1353/jmh.0.0183. ISSN 0899-3718. S2CID 159770864.
Schneid, Frederick (2005). Napoleon's Conquest of Europe: The War of the Third Coalition. Westport: Praeger. ISBN 978-0-275-98096-2. OCLC 57134421.
First World War: Brown, Ian Malcolm (1998). British Logistics on the Western Front, 1914-1919. Westport, Connecticut: Praeger. ISBN 978-0-275-95894-7. OCLC 37043887.
Butler, Arthur Graham (1938). "The Gallipoli Campaign". In Butler, Arthur Graham (ed.). Gallipoli, Palestine and New Guinea. Official History of the Australian Army Medical Services, 1914–1918. Vol. I. Canberra: Australian War Memorial. pp. 1–546. OCLC 220879097.
Downes, Rupert (1938). "The Campaign in Sinai and Palestine". In Butler, Arthur Graham (ed.). Gallipoli, Palestine and New Guinea. Official History of the Australian Army Medical Services, 1914–1918. Vol. I. Canberra: Australian War Memorial. pp. 547–780. OCLC 220879097.
Edmonds, J. E. (1932). Military Operations: France and Belgium, 1916: Sir Douglas Haig's Command to the 1st July: Battle of the Somme. London: Macmillan. OCLC 219851438.
Henniker, A. M. (1937). Transportation on the Western Front 1914–1918. London: HM Stationery Office. OCLC 608680057.
Second World War: Ballantine, Duncan S. (1947). U.S. Naval Logistics in the Second World War. Princeton, New Jersey: Princeton University Press. OCLC 1175973933.
Balsamo, Larry T. (May 1991). "Germany's Armed Forces in the Second World War: Manpower, Armaments, and Supply". The History Teacher. 24 (3): 263–277. doi:10.2307/494616. ISSN 0018-2745. JSTOR 494616.
Beaver, Daniel R. (1993). "'Deuce and a Half': Selecting U.S. Army Trucks, 1920–1945". In Lynn, John A. (ed.). Feeding Mars: Logistics in Western Warfare from the Middle Ages to the Present. London and New York: Routledge. pp. 251–260. ISBN 978-0-367-15749-4. OCLC 1303906366.
Bickel, Lennard (1995) [1972]. Florey: The Man Who Made Penicillin. Melbourne University Press Australian Lives. Carlton, Victoria: Melbourne University Press. ISBN 0-522-84712-9. OCLC 761193113.
Butlin, S. J.; Schedvin, C. B. (1977). War Economy 1942-1945. Canberra: Australian War Memorial. OCLC 3006988.
Carter, Worrall Reed (1953). Beans, Bullets, and Black Oil: The Story of Fleet Logistics Afloat in the Pacific During World War II. Washington, D.C.: Department of the Navy. OCLC 781884. Retrieved 30 May 2023.
Caruana, Joseph (December 2012). "Emergency Victualling of Malta During WWII". Warship International. 49 (4): 357–364. ISSN 0043-0374. JSTOR 44895976.
Coakley, Robert W.; Leighton, Richard M. (1968). Global Logistics and Strategy 1943–1945 (PDF). United States Army in World War II – The War Department. Washington, DC: Office of the Chief of Military History, Department of the Army. OCLC 23241977. Retrieved 16 June 2021.
Dick, C. J. (2016). From Victory to Stalemate – Decisive and Indecisive Military Operations. Vol. 1. Lawrence, Kansas: University Press of Kansas. ISBN 978-0-7006-2293-1. OCLC 1023039366.
Gropman, Alan (1997). "Industrial Mobilisation". In Gropman, Alan (ed.). The Big 'L': American Logistics in World War II (PDF). Washington, D.C.: National Defense University Press. pp. 1–96. ISBN 978-0-16-048668-5. OCLC 36888452. Retrieved 30 May 2023.
Hill, Alexander (2007). "British Lend Lease Aid and the Soviet War Effort, June 1941-June 1942". The Journal of Military History. 71 (3): 773–808. doi:10.1353/jmh.2007.0206. ISSN 0899-3718. JSTOR 30052890.
Hone, Trent (April 2023). "From Mobile Fleet to Mobile Force: The Evolution of US Navy Logistics in the Central Pacific during World War II". The Journal of Military History. 87 (2): 367–403. ISSN 0899-3718.
Long, Gavin (1963). The Final Campaigns. Canberra: Australian War Memorial. OCLC 1297619. Retrieved 25 March 2019.
Leighton, Richard M.; Coakley, Robert W. (1954). Global Logistics and Strategy 1940-1943 (PDF). Washington, DC: Office of the Chief of Military History, Department of the Army. OCLC 49360588.
Losman, Donald L.; Kyriakopoulos, Irene; Ahalt, J. Dawson (1997). "The Economics of America's World War II Mobilisation". In Gropman, Alan (ed.). The Big 'L': American Logistics in World War II (PDF). Washington, D.C.: National Defense University Press. pp. 145–192. ISBN 978-0-16-048668-5. OCLC 36888452. Retrieved 30 May 2023.
Lutes, LeRoy (1993) [1948]. Logistics in World War II: Final Report of the Army Service Forces (PDF). Washington, DC: US Government Printing Office. OCLC 847595465. Retrieved 20 September 2021.
Ross, William F.; Romanus, Charles F. (1965). The Quartermaster Corps: Operations in the War Against Germany (PDF). Washington, DC: Center of Military History, United States Army. OCLC 56044302. Retrieved 23 February 2020.
Ruppenthal, Roland G. (1953). Logistical Support of the Armies (PDF). United States Army in World War II – The European Theater of Operations. Vol. I. Washington, DC: Center of Military History, United States Army. OCLC 640653201. Retrieved 14 July 2019.
Ruppenthal, Roland G. (1959). Logistical Support of the Armies (PDF). United States Army in World War II – The European Theater of Operations. Vol. II. Washington, DC: Center of Military History, United States Army. OCLC 8743709. Retrieved 6 March 2020.
Rutherford, Jeff (October 2021). "Germany's Total War: Combat and Occupation around the Kursk Salient, 1943". The Journal of Military History. 85 (4): 954–979. ISSN 0899-3718.
Sweeney, Tony (2003). Malaria Frontline: Australian Army Research during World War II. Carlton, Victoria: University of Melbourne Press. ISBN 0-522-85033-2. OCLC 52380928.
Waddell, Steve R. (1994). United States Army Logistics: The Normandy Campaign. Contributions in Military Studies, No. 155. Westport, Connecticut; London: Greenwood Press. ISBN 978-0-313-29054-1. OCLC 467960939.
Walker, Allan S. (1952). Clinical Problems of War. Australia in the War of 1939–1945 Series 5 – Medical. Canberra: Australian War Memorial. OCLC 8324033. Retrieved 2 March 2023.
Post Second World War: Bealt, Jennifer; Fernández Barrera, Jair Camilo; Mansouri, S. Afshin (August 2016). "Collaborative Relationships Between Logistics Service Providers and Humanitarian Organizations During Disaster Relief Operations". Journal of Humanitarian Logistics and Supply Chain Management. 6 (2): 118–144. doi:10.1108/JHLSCM-02-2015-0008. ISSN 2042-6747.
Conahan, Frank C. (10 January 1992). Operation Desert Storm: Early Performance Assessment of Bradley and Abrams (PDF) (Report). Washington, D.C.: United States General Accounting Office. GAO/NSIAD-92-94. Retrieved 3 June 2023.
Cohen, Eliot (Summer 1978). "NATO Standardization: The Perils of Common Sense". Foreign Policy (31): 72–90. doi:10.2307/1148145. ISSN 0015-7228. JSTOR 1148145.
Crawford, John; Harper, Glyn (2001). Operation East Timor: The New Zealand Defence Force in East Timor 1999–2001. Auckland: Reed Publishing. ISBN 0-7900-0823-8. OCLC 49616580.
Demchak, Chris (1991). Military Organizations, Complex Machines: Modernization in the U.S. Armed Services. Cornell Studies in Security Affairs. Ithaca: Cornell University Press. ISBN 978-0-8014-2468-7. OCLC 1083598079.
Foxton, P. D. (1994). Powering War: Modern Land Force Logistics. Land Warfare: Brassey's New Battlefield Weapons Systems and Technology Series. Vol. 11. London: Brassey's (UK). ISBN 978-1-85753-048-3. OCLC 28709906.
Heiser, Joseph M. Jr. (1974). Logistic Support. Vietnam Studies. Washington, D.C.: Department of the Army. OCLC 991692.
Huston, James A. (1989). Guns and Butter, Powder and Rice: U.S. Army Logistics in the Korean War. Selinsgrove, Pennsylvania: Susquehanna University Press. ISBN 978-0-941664-87-5. OCLC 18523064.
Krisinger, Chris J. (Spring 1989). "Operation Nickel Grass: Airlift in Support of National Policy" (PDF). Airpower Journal. 3 (1): 16–28. ISSN 1554-2505. Retrieved 31 May 2023.
Martin, Bradley; Barnett, D. Sean; McCarthy, Devin (1 January 2023). Russian Logistics and Sustainment Failures in the Ukraine Conflict (PDF) (Report). Research Reports. RAND Corporation. doi:10.7249/RRA2033-1. RR-A2033-1. Retrieved 20 October 2023.
McGrath, John J. (2007). The Other End of the Spear: The Tooth-to-Tail Ratio (T3R) in Modern Military Operations (PDF). The Long War Series. Fort Leavenworth, Kansas: Combat Studies Institute Press. ISBN 978-0-16-078944-1. OCLC 154309350. Occasional Paper 23. Retrieved 4 June 2023.
Menarchik, Douglas (1993). Powerlift—Getting to Desert Storm: Strategic Transportation and Strategy in the New World Order. Westport, Connecticut: Praeger. ISBN 978-0-275-94642-5. OCLC 27430669.
National Research Council (1999). Reducing the Logistics Burden for the Army After Next: Doing More with Less. Washington, D.C.: National Academy Press. doi:10.17226/6402. ISBN 978-0-309-06378-4. OCLC 41228012.
Privratsky, Kenneth L. (2014). Logistics in the Falklands War. Barnsley, South Yorkshire: Pen and Sword Books. ISBN 978-1-47382-312-9. OCLC 890938195.
Shrader, Charles R. (1999). |
mil_tactics_continued_pretraining.csv | History of military logistics | Powerlift—Getting to Desert Storm: Strategic Transportation and Strategy in the New World Order. Westport, Connecticut: Praeger. ISBN 978-0-275-94642-5. OCLC 27430669.
National Research Council (1999). Reducing the Logistics Burden for the Army After Next: Doing More with Less. Washington, D.C.: National Academy Press. doi:10.17226/6402. ISBN 978-0-309-06378-4. OCLC 41228012.
Privratsky, Kenneth L. (2014). Logistics in the Falklands War. Barnsley, South Yorkshire: Pen and Sword Books. ISBN 978-1-47382-312-9. OCLC 890938195.
Shrader, Charles R. (1999). The First Helicopter War: Logistics and Mobility in Algeria, 1954-1962. Westport, Connecticut: Praeger. ISBN 978-0-275-96388-0. OCLC 39963378.
Shrader, Charles R. (2015). A War of Logistics: Parachutes and Porters in Indochina, 1945–1954. Lexington, Kentucky: University Press of Kentucky. ISBN 978-0-8131-6575-2. OCLC 908071869.
Staats, Elmer B. (19 January 1978). Standardization in NATO: Improving The Effectiveness and Economy of Mutual Defense Efforts (PDF) (Report). Washington, D.C.: United States General Accounting Office. Retrieved 5 June 2023.
Thompson, Julian (1985). No Picnic: 3 Commando Brigade in the Falklands. London: Leo Cooper. ISBN 0-436-52052-4. OCLC 924649440.
Ti, Ronald; Kinsey, Christopher (21 July 2023). "Lessons from the Russo-Ukrainian Conflict: The primacy of Logistics Over Strategy". Defence Studies. 23 (3): 381–398. doi:10.1080/14702436.2023.2238613. ISSN 2324-9315.
Wallis, Eric T. (May–June 2008). "From Just In Case to Just In Time" (PDF). Army Logistician. 40 (3): 36–38. ISSN 0004-2528. Retrieved 23 June 2023.
General: Antill, Peter D. (2018). "Defence Logistics: A Historical Perspective". In Smith, Jeremy C. D. (ed.). Defence Logistics. London: Kogan Page. pp. 35–63. ISBN 978-0-7494-7803-2. OCLC 1020465815.
Black, Jeremy (2021). Logistics: The Key to Victory. Barnsley, South Yorkshire: Pen and Sword. ISBN 978-1-39900-601-9.
Creveld, Martin van (1997) [1977]. Supplying War: Logistics from Wallenstein to Patton. Cambridge: Cambridge University Press. ISBN 978-0-521-21730-9. OCLC 318940605.
Dyer, Gwynne (1985). War. London: The Bodley Head. ISBN 978-0-370-30729-9. OCLC 13096168.
Huston, James A. (1966). The Sinews of War: Army Logistics 1775-1953 (PDF). Army Historical Series. Washington, DC: Center of Military History, United States Army. OCLC 573210. Retrieved 2 June 2023.
Kress, Moshe (2002). Operational Logistics: The Art and Science of Sustaining Military Operations. Kluwer Academic Publishers. ISBN 978-1-4020-7084-6. OCLC 936710657.
Lynn, John A., ed. (1993). Feeding Mars: Logistics in Western Warfare from the Middle Ages to the Present. London and New York: Routledge. ISBN 978-0-367-15749-4. OCLC 1303906366.
Macksey, Kenneth (1989). For Want of a Nail: The Impact of War on Logistics and Communications. London: Brassey's. ISBN 978-0-08-036268-7. OCLC 19589142.
Madigan, Russel (1988). Technology in Australia, 1788-1988: A Condensed History of Australian Technological Innovation and Adaptation During the First Two Hundred Years. Melbourne: Australian Academy of Technological Sciences and Engineering. Retrieved 28 May 2023.
Mann, Michael (2012). The Sources of Social Power. Vol. 1: A History of Power from the Beginning to AD 1760. Cambridge University Press. ISBN 978-1-107-03117-3. OCLC 863598819.
Quail, Geoffrey Grant (2017). Lessons Learned: The Australian Military and Tropical Medicine. Newport, New South Wales: Big Sky Publishing. ISBN 978-1-925520-22-4. OCLC 964933604.
Robertson, Gordon L. (2012). Food Packaging (third ed.). Boca Raton, Florida: CRC Press. ISBN 978-1-4398-6242-1. OCLC 883130776.
Rutner, Stephen M.; Aviles, Maria; Cox, Scott (2012). "Logistics Evolution: A Comparison of Military and Commercial Logistics Thought". The International Journal of Logistics Management. 23 (1): 96–118. doi:10.1108/09574091211226948. ISSN 0957-4093.
Serrano, A.; Kalenatic, D.; López, C.; Montoya-Torres, J.R. (2023). "Evolution of Military Logistics". Logistics. 7 (2): 22. doi:10.3390/logistics7020022. ISSN 2305-6290.
Thompson, Julian (1991). Lifeblood of War: Logistics in Armed Conflict. London: Brassey's. ISBN 978-0-08-040977-1. OCLC 260185060.
External links: Media related to Military logistics at Wikimedia Commons
Military Logistics: A Brief History
Defence Logistics in Military History – An Analysis:
Part One · Part Two · Part Three · Part Four |
mil_tactics_continued_pretraining.csv | Horses in warfare | Types of horse used in warfare: A fundamental principle of equine conformation is "form to function". Therefore, the type of horse used for various forms of warfare depended on the work performed, the weight a horse needed to carry or pull, and distance travelled. Weight affects speed and endurance, creating a trade-off: armour added protection, but added weight reduced maximum speed. Therefore, various cultures had different military needs. In some situations, one primary type of horse was favoured over all others. In other places, multiple types were needed; warriors would travel to battle riding a lighter horse of greater speed and endurance, and then switch to a heavier horse, with greater weight-carrying capacity, when wearing heavy armour in actual combat.
The average horse can carry up to approximately 30% of its body weight. While all horses can pull more weight than they can carry, the maximum weight that horses can pull varies widely, depending on the build of the horse, the type of vehicle, road conditions, and other factors. Horses harnessed to a wheeled vehicle on a paved road can pull as much as eight times their weight, but far less if pulling wheelless loads over unpaved terrain. Thus, horses that were driven varied in size and had to make a trade-off between speed and weight, just as riding animals did. Light horses could pull a small war chariot at speed. Heavy supply wagons, artillery, and support vehicles were pulled by heavier horses or a larger number of horses. The method by which a horse was hitched to a vehicle also mattered: horses could pull greater weight with a horse collar than with a breast collar, and less still with an ox yoke.
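To make these rules of thumb concrete, the short Python sketch below computes the rough carrying and pulling figures quoted in the paragraph above; the 500 kg example horse is a hypothetical value chosen for illustration, and the 30% and eight-times factors are the approximations stated here, not precise limits.

def carrying_capacity_kg(body_weight_kg: float, fraction: float = 0.30) -> float:
    # Rough load (rider plus kit) a horse can carry: about 30% of its body weight.
    return body_weight_kg * fraction

def wheeled_pulling_capacity_kg(body_weight_kg: float, multiplier: float = 8.0) -> float:
    # Upper bound for a load pulled on a wheeled vehicle over a paved road:
    # up to about eight times body weight; far less over unpaved terrain.
    return body_weight_kg * multiplier

if __name__ == "__main__":
    horse_kg = 500  # hypothetical medium-weight horse
    print(f"Carry: about {carrying_capacity_kg(horse_kg):.0f} kg")  # ~150 kg
    print(f"Pull: up to about {wheeled_pulling_capacity_kg(horse_kg):.0f} kg on a paved road")  # ~4,000 kg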
Light-weight: Light, oriental horses such as the ancestors of the modern Arabian, Barb, and Akhal-Teke were used for warfare that required speed, endurance, and agility. Such horses ranged from about 12 hands (48 inches, 122 cm) to just under 15 hands (60 inches, 152 cm), weighing approximately 360 to 450 kilograms (800 to 1,000 lb). To move quickly, riders had to use lightweight tack and carry relatively light weapons such as bows, light spears, javelins, or later rifles. This was the original horse used for early chariot warfare, raiding, and light cavalry.
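The heights and weights in this and the following paragraphs mix hands, inches, centimetres, kilograms and pounds. A minimal conversion sketch (assuming only the standard definitions 1 hand = 4 inches, 1 inch = 2.54 cm, 1 kg ≈ 2.2 lb) reproduces the quoted figures, for example 15 hands = 60 inches ≈ 152 cm and 450 kg ≈ 1,000 lb.

HAND_IN_INCHES = 4.0
INCH_IN_CM = 2.54
KG_IN_LB = 2.20462

def hands_to_cm(hands: float) -> float:
    # Heights such as "14.2 hands" mean 14 hands plus 2 inches, not a decimal fraction.
    whole = int(hands)
    extra_inches = round((hands - whole) * 10)
    total_inches = whole * HAND_IN_INCHES + extra_inches
    return total_inches * INCH_IN_CM

def kg_to_lb(kg: float) -> float:
    return kg * KG_IN_LB

if __name__ == "__main__":
    for h in (12, 14.2, 15, 16):
        print(f"{h} hands = {hands_to_cm(h):.0f} cm")  # 122, 147, 152, 163 cm
    print(f"360-450 kg = {kg_to_lb(360):.0f}-{kg_to_lb(450):.0f} lb")  # roughly 800-1,000 lb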
Relatively light horses were used by many cultures, including the Ancient Egyptians, the Mongols, the Arabs, and the Native Americans. Throughout the Ancient Near East, small, light animals were used to pull chariots designed to carry no more than two passengers, a driver and a warrior. In the European Middle Ages, a lightweight war horse became known as the rouncey.
Medium-weight: Medium-weight horses developed as early as the Iron Age with the needs of various civilizations to pull heavier loads, such as chariots capable of holding more than two people, and, as light cavalry evolved into heavy cavalry, to carry heavily armoured riders. The Scythians were among the earliest cultures to produce taller, heavier horses. Larger horses were also needed to pull supply wagons and, later on, artillery pieces. In Europe, horses were also used to a limited extent to maneuver cannons on the battlefield as part of dedicated horse artillery units. Medium-weight horses had the greatest range in size, from about 14.2 hands (58 inches, 147 cm) but stocky, to as much as 16 hands (64 inches, 163 cm), weighing approximately 450 to 540 kilograms (1,000 to 1,200 lb). They generally were quite agile in combat, though they did not have the raw speed or endurance of a lighter horse. By the Middle Ages, larger horses in this class were sometimes called destriers. They may have resembled modern Baroque or heavy warmblood breeds. Later, horses similar to the modern warmblood often carried European cavalry.
Heavy-weight: Large, heavy horses, weighing from 680 to 910 kilograms (1,500 to 2,000 lb), the ancestors of today's draught horses, were used, particularly in Europe, from the Middle Ages onward. They pulled heavy loads like supply wagons and were disposed to remain calm in battle. Some historians believe they may have carried the heaviest-armoured knights of the Late Medieval Period, though others dispute this claim, indicating that the destrier, or knight's battle horse, was a medium-weight animal. It is also disputed whether the destrier class included draught animals or not. Breeds at the smaller end of the heavyweight category may have included the ancestors of the Percheron, agile for their size and physically able to maneuver in battle.
Ponies: The British Army's 2nd Dragoons in 1813 had 340 ponies of 14.2 hands (58 inches, 147 cm) and 55 ponies of 14 hands (56 inches, 142 cm); the Lovat Scouts, formed in 1899, were mounted on Highland ponies; the British Army recruited 200 Dales ponies in World War II for use as pack and artillery animals; and the British Territorial Army experimented with the use of Dartmoor ponies as pack animals in 1935, finding them to be better than mules for the job.
Other equids: Horses were not the only equids used to support human warfare. Donkeys have been used as pack animals from antiquity to the present. Mules were also commonly used, especially as pack animals and to pull wagons, but also occasionally for riding. Because mules are often both calmer and hardier than horses, they were particularly useful for strenuous support tasks, such as hauling supplies over difficult terrain. However, under gunfire, they were less cooperative than horses, so were generally not used to haul artillery on battlefields. The size of a mule and work to which it was put depended largely on the breeding of the mare that produced the mule. Mules could be lightweight, medium weight, or even, when produced from draught horse mares, of moderate heavy weight.
Training and deployment: The oldest known manual on training horses for chariot warfare was written c. 1350 BC by the Hittite horsemaster Kikkuli. An ancient manual on training riding horses, particularly for the Ancient Greek cavalry, is Hippike (On Horsemanship), written about 360 BC by the Greek cavalry officer Xenophon; another early text was that of Kautilya, written about 323 BC.
Whether horses were trained to pull chariots, to be ridden as light or heavy cavalry, or to carry the armoured knight, much training was required to overcome the horse's natural instinct to flee from noise, the smell of blood, and the confusion of combat. They also learned to accept any sudden or unusual movements of humans while using a weapon or avoiding one. Horses used in close combat may have been taught, or at least permitted, to kick, strike, and even bite, thus becoming weapons themselves for the warriors they carried.
In most cultures, a war horse used as a riding animal was trained to be controlled with limited use of reins, responding primarily to the rider's legs and weight. The horse became accustomed to any necessary tack and protective armour placed upon it, and learned to balance under a rider who would also be laden with weapons and armour. Developing the balance and agility of the horse was crucial. The origins of the discipline of dressage came from the need to train horses to be both obedient and manoeuvrable. The Haute ecole or "High School" movements of classical dressage taught today at the Spanish Riding School have their roots in manoeuvres designed for the battlefield. However, the airs above the ground were unlikely to have been used in actual combat, as most would have exposed the unprotected underbelly of the horse to the weapons of foot soldiers.
Horses used for chariot warfare were not only trained for combat conditions, but because many chariots were pulled by a team of two to four horses, they also had to learn to work together with other animals in close quarters under chaotic conditions.
Technological innovations: Horses were probably ridden in prehistory before they were driven. However, evidence is scant, mostly simple images of human figures on horse-like animals drawn on rock or clay. The earliest tools used to control horses were bridles of various sorts, which were invented nearly as soon as the horse was domesticated. Evidence of bit wear appears on the teeth of horses excavated at the archaeology sites of the Botai culture in northern Kazakhstan, dated 3500–3000 BC.
Harness and vehicles: The invention of the wheel was a major technological innovation that gave rise to chariot warfare. At first, equines, both horses and onagers, were hitched to wheeled carts by means of a yoke around their necks in a manner similar to that of oxen. However, such a design is incompatible with equine anatomy, limiting both the strength and mobility of the animal. By the time of the Hyksos invasions of Egypt, c. 1600 BC, horses were pulling chariots with an improved harness design that made use of a breastcollar and breeching, which allowed a horse to move faster and pull more weight.
Even after the chariot had become obsolete as a tool of war, there still was a need for technological innovations in pulling technologies; horses were needed to pull heavy loads of supplies and weapons. The invention of the horse collar in China during the 5th century AD (Northern and Southern dynasties) allowed horses to pull greater weight than they could when hitched to a vehicle with the ox yokes or breast collars used in earlier times. The horse collar arrived in Europe during the 9th century, and became widespread by the 12th century.
Riding equipment: Two major innovations that revolutionised the effectiveness of mounted warriors in battle were the saddle and the stirrup. Riders quickly learned to pad their horse's backs to protect themselves from the horse's spine and withers, and fought on horseback for centuries with little more than a blanket or pad on the horse's back and a rudimentary bridle. To help distribute the rider's weight and protect the horse's back, some cultures created stuffed padding that resembles the panels of today's English saddle. Both the Scythians and Assyrians used pads with added felt attached with a surcingle or girth around the horse's barrel for increased security and comfort. Xenophon mentioned the use of a padded cloth on cavalry mounts as early as the 4th century BC.
The saddle with a solid framework, or "tree", provided a bearing surface to protect the horse from the weight of the rider, but was not widespread until the 2nd century AD. However, it made a critical difference, as horses could carry more weight when distributed across a solid saddle tree. A solid tree, the predecessor of today's Western saddle, also allowed a more built-up seat to give the rider greater security in the saddle. The Romans are credited with the invention of the solid-treed saddle.
An invention that made cavalry particularly effective was the stirrup. A toe loop that held the big toe was used in India possibly as early as 500 BC, and later a single stirrup was used as a mounting aid. The first set of paired stirrups appeared in China about 322 AD during the Jin dynasty. Following the invention of paired stirrups, which allowed a rider greater leverage with weapons, as well as both increased stability and mobility while mounted, nomadic groups such as the Mongols adopted this technology and developed a decisive military advantage. By the 7th century, due primarily to invaders from Central Asia, stirrup technology spread from Asia to Europe. The Avar invaders are viewed as primarily responsible for spreading the use of the stirrup into central Europe. However, while stirrups were known in Europe in the 8th century, pictorial and literary references to their use date only from the 9th century. Widespread use in Northern Europe, including England, is credited to the Vikings, who spread the stirrup in the 9th and 10th centuries to those areas.
Tactics: The first archaeological evidence of horses used in warfare dates from between 4000 and 3000 BC in the steppes of Eurasia, in what today is Ukraine, Hungary, and Romania. Not long after domestication of the horse, people in these locations began to live together in large fortified towns for protection from the threat of horseback-riding raiders, who could attack and escape faster than people of more sedentary cultures could follow. Horse-mounted nomads of the steppe and present-day Eastern Europe spread Indo-European languages as they conquered other tribes and groups.
The use of horses in organised warfare was documented early in recorded history. One of the first depictions is the "war panel" of the Standard of Ur, in Sumer, dated c. 2500 BC, showing horses (or possibly onagers or mules) pulling a four-wheeled wagon.
Chariot warfare: Among the earliest evidence of chariot use are the burials of horse and chariot remains by the Andronovo (Sintashta-Petrovka) culture in modern Russia and Kazakhstan, dated to approximately 2000 BC. The oldest documentary evidence of what was probably chariot warfare in the Ancient Near East is the Old Hittite Anitta text, of the 18th century BC, which mentioned 40 teams of horses at the siege of Salatiwara. The Hittites became well known throughout the ancient world for their prowess with the chariot. Widespread use of the chariot in warfare across most of Eurasia coincides approximately with the development of the composite bow, known from c. 1600 BC. Further improvements in wheels and axles, as well as innovations in weaponry, soon resulted in chariots being driven in battle by Bronze Age societies from China to Egypt.
The Hyksos invaders brought the chariot to Ancient Egypt in the 16th century BC and the Egyptians adopted its use from that time forward. The oldest preserved text related to the handling of war horses in the ancient world is the Hittite manual of Kikkuli, which dates to about 1350 BC, and describes the conditioning of chariot horses.
Chariots existed in the Minoan civilization, as they were inventoried on storage lists from Knossos in Crete, dating to around 1450 BC. Chariots were also used in China as far back as the Shang dynasty (c. 1600–1050 BC), where they appear in burials. The high point of chariot use in China was in the Spring and Autumn period (770–476 BC), although they continued in use up until the 2nd century BC.
Descriptions of the tactical role of chariots in Ancient Greece and Rome are rare. The Iliad, possibly referring to Mycenaean practices used c. 1250 BC, describes the use of chariots for transporting warriors to and from battle, rather than for actual fighting. Later, Julius Caesar, invading Britain in 55 and 54 BC, noted British charioteers throwing javelins, then leaving their chariots to fight on foot.
Cavalry: Some of the earliest examples of horses being ridden in warfare were horse-mounted archers or javelin-throwers, dating to the reigns of the Assyrian rulers Ashurnasirpal II and Shalmaneser III. However, these riders sat far back on their horses, a precarious position for moving quickly, and the horses were held by a handler on the ground, keeping the archer free to use the bow. Thus, these archers were more a type of mounted infantry than true cavalry. The Assyrians developed cavalry in response to invasions by nomadic people from the north, such as the Cimmerians, who entered Asia Minor in the 8th century BC and took over parts of Urartu during the reign of Sargon II, approximately 721 BC. Mounted warriors such as the Scythians also had an influence on the region in the 7th century BC. By the reign of Ashurbanipal in 669 BC, the Assyrians had learned to sit forward on their horses in the classic riding position still seen today and could be said to be true light cavalry. The ancient Greeks used both light horse scouts and heavy cavalry, although not extensively, possibly due to the cost of keeping horses.
Heavy cavalry was believed to have been developed by the Ancient Persians, although others argue for the Sarmatians. By the time of Darius (558–486 BC), Persian military tactics required horses and riders that were completely armoured, so the Persians selectively bred a heavier, more muscled horse to carry the additional weight. The cataphract was a type of heavily armoured cavalry with distinct tactics, armour, and weaponry used from the time of the Persians up until the Middle Ages.
In Ancient Greece, Philip of Macedon is credited with developing tactics allowing massed cavalry charges. The most famous Greek heavy cavalry units were the companion cavalry of Alexander the Great. The Chinese of the 4th century BC during the Warring States period (403–221 BC) began to use cavalry against rival states. To fight nomadic raiders from the north and west, the Chinese of the Han dynasty (202 BC – 220 AD) developed effective mounted units. Cavalry was not used extensively by the Romans during the Roman Republic period, but by the time of the Roman Empire, they made use of heavy cavalry. However, the backbone of the Roman army was the infantry.
Horse artillery: Once gunpowder was invented, another major use of horses was as draught animals for heavy artillery, or cannon. In addition to field artillery, where horse-drawn guns were attended by gunners on foot, many armies had artillery batteries where each gunner was provided with a mount. Horse artillery units generally used lighter pieces, pulled by six horses. "9-pounders" were pulled by eight horses, and heavier artillery pieces needed a team of twelve. With the individual riding horses required for officers, surgeons and other support staff, as well as those pulling the artillery guns and supply wagons, an artillery battery of six guns could require 160 to 200 horses. Horse artillery usually came under the command of cavalry divisions, but in some battles, such as Waterloo, the horse artillery were used as a rapid response force, repulsing attacks and assisting the infantry. Agility was important; the ideal artillery horse was 1.5 to 1.6 metres (15 to 16 hands) high, strongly built, but able to move quickly.
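As a rough cross-check on the 160-to-200-horse figure above, the sketch below tallies a six-gun battery. The split between gun teams, wagon teams, riding horses and spares is a hypothetical illustration chosen to show the arithmetic, not a documented establishment; only the team sizes per gun come from the text.

def battery_horses(guns: int = 6,
                   horses_per_gun_team: int = 6,   # light pieces; 8 for "9-pounders", 12 for heavier guns
                   wagons: int = 10,               # assumed ammunition and supply wagons
                   horses_per_wagon: int = 6,      # assumed wagon team size
                   riding_horses: int = 40,        # assumed mounts for gunners, officers, surgeons and staff
                   spares: int = 20) -> int:       # assumed reserve allowance
    # Very rough estimate of the horses needed by a horse-artillery battery.
    return guns * horses_per_gun_team + wagons * horses_per_wagon + riding_horses + spares

if __name__ == "__main__":
    print(battery_horses())  # 36 + 60 + 40 + 20 = 156, in the same ballpark as the quoted 160-200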
Asia:
Central Asia: Relations between steppe nomads and the settled people in and around Central Asia were often marked by conflict. The nomadic lifestyle was well suited to warfare, and steppe cavalry became some of the most militarily potent forces in the world, only limited by nomads' frequent lack of internal unity. Periodically, strong leaders would organise several tribes into one force, creating an almost unstoppable power. These unified groups included the Huns, who invaded Europe, and under Attila, conducted campaigns in both eastern France and northern Italy, over 500 miles apart, within two successive campaign seasons. Other unified nomadic forces included the Wu Hu rebellions in China, and the Mongol conquest of much of Eurasia.
South Asia: The literature of ancient India describes numerous horse nomads. Some of the earliest references to the use of horses in South Asian warfare are Puranic texts, which refer to an attempted invasion of India by the joint cavalry forces of the Sakas, Kambojas, Yavanas, Pahlavas, and Paradas, called the "five hordes" (pañca.ganah) or "Kśatriya" hordes (Kśatriya ganah). About 1600 BC, they captured the throne of Ayodhya by dethroning the Vedic king, Bahu. Later texts, such as the Mahābhārata, c. 950 BC, appear to recognise efforts taken to breed war horses and develop trained mounted warriors, stating that the horses of the Sindhu and Kamboja regions were of the finest quality, and the Kambojas, Gandharas, and Yavanas were expert in fighting from horses.
In technological innovation, the early toe loop stirrup is credited to the cultures of India, and may have been in use as early as 500 BC. Not long after, the cultures of Mesopotamia and Ancient Greece clashed with those of central Asia and India. Herodotus (484–425 BC) wrote that Gandarian mercenaries of the Achaemenid Empire were recruited into the army of emperor Xerxes I of Persia (486–465 BC), which he led against the Greeks. A century later, the "Men of the Mountain Land," from north of Kabul River, served in the army of Darius III of Persia when he fought against Alexander the Great at Arbela in 331 BC. In battle against Alexander at Massaga in 326 BC, the Assakenoi forces included 20,000 cavalry. The Mudra-Rakshasa recounted how cavalry of the Shakas, Yavanas, Kambojas, Kiratas, Parasikas, and Bahlikas helped Chandragupta Maurya (c. 320–298 BC) defeat the ruler of Magadha and take the throne, thus laying the foundations of Mauryan dynasty in Northern India.
Mughal cavalry used gunpowder weapons, but were slow to replace the traditional composite bow. Under the impact of European military successes in India, some Indian rulers adopted the European system of massed cavalry charges, although others did not. By the 18th century, Indian armies continued to field cavalry, but mainly of the heavy variety.
East Asia: The Chinese used chariots for horse-based warfare until light cavalry forces became common during the Warring States era (402–221 BC). A major proponent of the change to riding horses from chariots was Wu Ling, c. 320 BC. However, conservative forces in China often opposed change, as cavalry did not benefit from the additional cachet attached to being the military branch dominated by the nobility as in medieval Europe. Nevertheless, during the reign of Emperor Wu of Han (r. 141–87 BC), it is recorded that 300,000 government-owned horses were insufficient for the cavalry and baggage trains of the Han military in the campaigns to expel the Xiongnu nomads from the Ordos Desert, Qilian Mountains, Khangai Mountains and Gobi Desert, spurring new policies that encouraged households to hand over privately-bred horses in exchange for military and corvee labor exemptions.
The Japanese samurai fought as cavalry for many centuries. They were particularly skilled in the art of using archery from horseback. The archery skills of mounted samurai were developed by training such as Yabusame, which originated in 530 AD and reached its peak under Minamoto no Yoritomo (1147–1199 AD) in the Kamakura period. They switched from an emphasis on mounted bowmen to mounted spearmen during the Sengoku period (1467–1615 AD).
Middle East: During the period when various Islamic empires controlled much of the Middle East as well as parts of West Africa and the Iberian peninsula, Muslim armies consisted mostly of cavalry, made up of fighters from various local groups, mercenaries and Turkoman tribesmen. The latter were considered particularly skilled as both lancers and archers from horseback. In the 9th century the use of Mamluks, slaves raised to be soldiers for various Muslim rulers, became increasingly common. Mobile tactics, advanced breeding of horses, and detailed training manuals made Mamluk cavalry a highly efficient fighting force. The use of armies consisting mostly of cavalry continued among the Turkish people who founded the Ottoman Empire. Their need for large mounted forces led to an establishment of the sipahi, cavalry soldiers who were granted lands in exchange for providing military service in times of war.
Mounted Muslim warriors conquered North Africa and the Iberian Peninsula during the 7th and 8th centuries AD following the Hijrah of Muhammad in 622 AD. By 630 AD, their influence had expanded across the Middle East and into western North Africa. By 711 AD, the light cavalry of Muslim warriors had reached Spain, and controlled most of the Iberian peninsula by 720. Their mounts were of various oriental types, including the North African Barb. A few Arabian horses may have come with the Umayyads who settled in the Guadalquivir valley. Another strain of horse that came with Islamic invaders was the Turkoman horse. Muslim invaders travelled north from present-day Spain into France, where they were defeated by the Frankish ruler Charles Martel at the Battle of Tours in 732 AD.
Europe:
Antiquity:
Middle Ages: During the European Middle Ages, there were three primary types of war horses: the destrier, the courser, and the rouncey, which differed in size and usage. A generic word used to describe medieval war horses was charger, which appears interchangeable with the other terms. The medieval war horse was of moderate size, rarely exceeding 15.2 hands (62 inches, 157 cm). Heavy horses were logistically difficult to maintain and less adaptable to varied terrains. The destrier of the early Middle Ages was moderately larger than the courser or rouncey, in part to accommodate heavier armoured knights. However, destriers were not as large as draught horses, averaging between 14.2 hands (58 inches, 147 cm) and 15 hands (60 inches, 152 cm). On the European continent, the need to carry more armour against mounted enemies such as the Lombards and Frisians led to the Franks developing heavier, bigger horses. As the amount of armour and equipment increased in the later Middle Ages, the height of the horses increased; some late medieval horse skeletons were of horses over 1.5 metres (15 hands).
Stallions were often used as destriers due to their natural aggression. However, there may have been some use of mares by European warriors, and mares, who were quieter and less likely to call out and betray their position to the enemy, were the preferred war horse of the Moors, who invaded various parts of Southern Europe from 700 AD through the 15th century. Geldings were used in war by the Teutonic Knights, and were known as "monk horses" (German: Mönchpferde or Mönchhengste). One advantage was that, if captured by the enemy, they could not be used to improve local bloodstock, thus maintaining the Knights' superiority in horseflesh.
Uses: The heavy cavalry charge, while it could be effective, was not a common occurrence. Battles were rarely fought on land suitable for heavy cavalry. While mounted riders remained effective for initial attacks, by the end of the 14th century, it was common for knights to dismount to fight, while their horses were sent to the rear, kept ready for pursuit. Pitched battles were avoided if possible, with most offensive warfare in the early Middle Ages taking the form of sieges, and in the later Middle Ages as mounted raids called chevauchées, with lightly armed warriors on swift horses.
The war horse was also seen in hastiludes – martial war games such as the joust, which began in the 11th century both as sport and to provide training for battle. Specialised destriers were bred for the purpose, although the expense of keeping, training, and outfitting them kept the majority of the population from owning one. While some historians suggest that the tournament had become a theatrical event by the 15th and 16th centuries, others argue that jousting continued to help cavalry train for battle until the Thirty Years' War.
Transition: The decline of the armoured knight was probably linked to changing structures of armies and various economic factors, and not obsolescence due to new technologies. However, some historians attribute the demise of the knight to the invention of gunpowder, or to the English longbow. Some link the decline to both technologies. Others argue these technologies actually contributed to the development of knights: plate armour was first developed to resist early medieval crossbow bolts, and the full harness worn by the early 15th century developed to resist longbow arrows. From the 14th century onwards, most plate was made from hardened steel, which resisted early musket ammunition. In addition, stronger designs did not make plate heavier; a full harness of musket-proof plate from the 17th century weighed 70 pounds (32 kg), significantly less than 16th century tournament armour.
The move to predominantly infantry-based battles from 1300 to 1550 was linked to both improved infantry tactics and changes in weaponry. By the 16th century, the concept of a combined-arms professional army had spread throughout Europe. Professional armies emphasized training, and were paid via contracts, a change from the ransom and pillaging which had reimbursed knights in the past. When this was coupled with the rising costs involved in outfitting and maintaining armour and horses, the traditional knightly classes began to abandon their profession. Light horses, or prickers, were still used for scouting and reconnaissance; they also provided a defensive screen for marching armies. Large teams of draught horses or oxen pulled the heavy early cannon. Other horses pulled wagons and carried supplies for the armies.
Early modern period: During the early modern period, the shift continued from heavy cavalry and the armoured knight to unarmoured light cavalry, including Hussars and Chasseurs à cheval. Light cavalry facilitated better communication, using fast, agile horses to move quickly across battlefields. The ratio of footmen to horsemen also increased over the period as infantry weapons improved and footmen became more mobile and versatile, particularly once the musket bayonet replaced the more cumbersome pike. During the Elizabethan era, mounted units included cuirassiers, heavily armoured and equipped with lances; light cavalry, who wore mail and bore light lances and pistols; and "petronels", who carried an early carbine. As heavy cavalry use declined, armour was increasingly abandoned and dragoons, whose horses were rarely used in combat, became more common: mounted infantry provided reconnaissance, escort and security. However, many generals still used the heavy mounted charge, from the late 17th and early 18th centuries, when sword-wielding wedge-formation shock troops penetrated enemy lines, to the early 19th century, when armoured heavy cuirassiers were employed.
Light cavalry continued to play a major role, particularly after the Seven Years' War, when Hussars started to play a larger part in battles. Though some leaders preferred tall horses for their mounted troops, this was as much for prestige as for increased shock ability, and many troops used more typical horses, averaging 15 hands. Cavalry tactics altered with fewer mounted charges, more reliance on drilled maneuvers at the trot, and use of firearms once within range. Ever-more elaborate movements, such as wheeling and the caracole, were developed to facilitate the use of firearms from horseback. These tactics were not greatly successful in battle, since pikemen protected by musketeers could deny cavalry room to manoeuvre. However, the advanced equestrianism required survives into the modern world as dressage. While restricted, cavalry was not rendered obsolete. As infantry formations developed in tactics and skills, artillery became essential to break formations; in turn, cavalry was required both to combat enemy artillery, which was susceptible to cavalry while deploying, and to charge enemy infantry formations broken by artillery fire. Thus, successful warfare depended on a balance of the three arms: cavalry, artillery and infantry.
As regimental structures developed, many units selected horses of uniform type, and some, such as the Royal Scots Greys, even specified colour. Trumpeters often rode distinctive horses so they stood out. Regional armies developed type preferences, such as British hunters, Hanoverians in central Europe, and the steppe ponies of the Cossacks, but once in the field, the lack of supplies typical of wartime meant that horses of all types were used. Since horses were such a vital component of most armies in early modern Europe, many armies instituted state stud farms to breed horses for the military. However, in wartime, supply rarely matched demand, resulting in some cavalry troops fighting on foot.
19th century: In the 19th century, distinctions between heavy and light cavalry became less significant; by the end of the Peninsular War, heavy cavalry were performing the scouting and outpost duties previously undertaken by light cavalry, and by the end of the 19th century the roles had effectively merged. Most armies at the time preferred cavalry horses to stand 15.2 hands (62 inches, 157 cm) and weigh 990 to 1,100 pounds (450 to 500 kg), although cuirassiers frequently had heavier horses. Lighter horses were used for scouting and raiding. Cavalry horses were generally obtained at 5 years of age and were in service from 10 to 12 years, barring loss. However, losses of 30–40% were common during a campaign, due to the conditions of the march as well as enemy action. Mares and geldings were preferred over less easily managed stallions.
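To illustrate the remount burden implied by these figures, the sketch below combines age-based replacement (a 10-to-12-year service life) with the 30–40% campaign losses quoted above; the 600-horse regiment is a hypothetical size used only for the arithmetic.

def remounts_per_year(regiment_horses: int, service_years: float, campaign_loss_rate: float) -> float:
    # Rough remount need for a year that includes a campaign:
    # horses retired by age plus horses lost on campaign.
    age_replacement = regiment_horses / service_years
    campaign_losses = regiment_horses * campaign_loss_rate
    return age_replacement + campaign_losses

if __name__ == "__main__":
    horses = 600  # hypothetical cavalry regiment
    low = remounts_per_year(horses, service_years=12, campaign_loss_rate=0.30)
    high = remounts_per_year(horses, service_years=10, campaign_loss_rate=0.40)
    print(f"Roughly {low:.0f}-{high:.0f} remounts for a year that includes a campaign")  # about 230-300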
During the French Revolutionary Wars and the Napoleonic Wars the cavalry's main offensive role was as shock troops. In defence cavalry were used to attack and harass the enemy's infantry flanks as they advanced. Cavalry were frequently used prior to an infantry assault, to force an infantry line to break and reform into formations vulnerable to infantry or artillery. Infantry frequently followed behind in order to secure any ground won or the cavalry could be used to break up enemy lines following a successful infantry action.
Mounted charges were carefully managed. A charge's maximum speed was 20 km/h; moving faster resulted in a break in formation and fatigued horses. Charges occurred across clear rising ground, and were effective against infantry both on the march and when deployed in a line or column. A foot battalion formed in line was vulnerable to cavalry, and could be broken or destroyed by a well-formed charge. Traditional cavalry functions altered by the end of the 19th century. Many cavalry units transferred in title and role to "mounted rifles": troops trained to fight on foot, but retaining mounts for rapid deployment, as well as for patrols, scouting, communications, and defensive screening. These troops differed from mounted infantry, who used horses for transport but did not perform the old cavalry roles of reconnaissance and support.
Sub-Saharan Africa: Horses were used for warfare in the central Sudan since the 9th century, where they were considered "the most precious commodity following the slave." The first conclusive evidence of horses playing a major role in the warfare of West Africa dates to the 11th century when the region was controlled by the Almoravids, a Muslim Berber dynasty. During the 13th and 14th centuries, cavalry became an important factor in the area. This coincided with the introduction of larger breeds of horse and the widespread adoption of saddles and stirrups. Increased mobility played a part in the formation of new power centers, such as the Oyo Empire in what today is Nigeria. The authority of many African Islamic states such as the Bornu Empire also rested in large part on their ability to subject neighboring peoples with cavalry. Despite harsh climate conditions, endemic diseases such as trypanosomiasis, the African horse sickness, and unsuitable terrain that limited the effectiveness of horses in many parts of Africa, horses were continuously imported and were, in some areas, a vital instrument of war. The introduction of horses also intensified existing conflicts, such as those between the Herero and Nama people in Namibia during the 19th century.
The African slave trade was closely tied to the imports of war horses, and as the prevalence of slaving decreased, fewer horses were needed for raiding. This significantly decreased the amount of mounted warfare seen in West Africa. By the time of the Scramble for Africa and the introduction of modern firearms in the 1880s, the use of horses in African warfare had lost most of its effectiveness. Nonetheless, in South Africa during the Second Boer War (1899–1902), cavalry and other mounted troops were the major combat force for the British, since the horse-mounted Boers moved too quickly for infantry to engage. The Boers presented a mobile and innovative approach to warfare, drawing on strategies that had first appeared in the American Civil War. The terrain was not well-suited to the British horses, resulting in the loss of over 300,000 animals. As the campaign wore on, losses were replaced by more durable African Basuto ponies, and Waler horses from Australia.
The Americas: The horse had been extinct in the Western Hemisphere for approximately 10,000 years prior to the arrival of Spanish Conquistadors in the early 16th century. Consequently, the Indigenous peoples of the Americas had no warfare technologies that could overcome the considerable advantage provided by European horses and gunpowder weapons. In particular this resulted in the conquest of the Aztec and Inca empires. The speed and increased impact of cavalry contributed to a number of early victories by European fighters in open terrain, though their success was limited in more mountainous regions. The Incas' well-maintained roads in the Andes enabled quick mounted raids, such as those undertaken by the Spanish while resisting the siege of Cuzco in 1536–37.
Indigenous populations of South America soon learned to use horses. In Chile, the Mapuche began using cavalry in the Arauco War in 1586. They drove the Spanish out of Araucanía at the beginning of the 17th century. Later, the Mapuche conducted mounted raids known as Malónes, first on Spanish, then on Chilean and Argentine settlements until well into the 19th century. In North America, Native Americans also quickly learned to use horses. In particular, the people of the Great Plains, such as the Comanche and the Cheyenne, became renowned horseback fighters. By the 19th century, they presented a formidable force against the United States Army.
During the American Revolutionary War (1775–1783), the Continental Army made relatively little use of cavalry, primarily relying on infantry and a few dragoon regiments. The United States Congress eventually authorized regiments specifically designated as cavalry in 1855. The newly formed American cavalry adopted tactics based on experiences fighting over vast distances during the Mexican War (1846–1848) and against indigenous peoples on the western frontier, abandoning some European traditions.
During the American Civil War (1861–1865), cavalry held the most important and respected role it would ever hold in the American military. Field artillery in the American Civil War was also highly mobile. Both horses and mules pulled the guns, though only horses were used on the battlefield. At the beginning of the war, most of the experienced cavalry officers were from the South and thus joined the Confederacy, leading to the Confederate Army's initial battlefield superiority. The tide turned at the 1863 Battle of Brandy Station, part of the Gettysburg campaign, where the Union cavalry, in the largest cavalry battle ever fought on the American continent, ended the dominance of the South. By 1865, Union cavalry were decisive in achieving victory. So important were horses to individual soldiers that the surrender terms at Appomattox allowed every Confederate cavalryman to take his horse home with him. This was because, unlike their Union counterparts, Confederate cavalrymen provided their own horses for service instead of drawing them from the government.
20th century: Although cavalry was used extensively throughout the world during the 19th century, horses became less important in warfare at the beginning of the 20th century. Light cavalry was still seen on the battlefield, but formal mounted cavalry began to be phased out for combat during and immediately after World War I, although units that included horses still had military uses well into World War II.
World War I: World War I saw great changes in the use of cavalry. The mode of warfare changed, and the use of trench warfare, barbed wire and machine guns rendered traditional cavalry almost obsolete. Tanks, introduced in 1917, began to take over the role of shock combat.
Early in the war, cavalry skirmishes were common, and horse-mounted troops were widely used for reconnaissance. On the Western Front, cavalry were an effective flanking force during the "Race to the Sea" in 1914, but were less useful once trench warfare was established. There were a few examples of successful shock combat, and cavalry divisions also provided important mobile firepower. Cavalry played a greater role on the Eastern Front, where trench warfare was less common. On the Eastern Front, and also against the Ottomans, the "cavalry was literally indispensable." British Empire cavalry proved adaptable, since they were trained to fight both on foot and while mounted, while other European cavalry relied primarily on shock action.
On both fronts, the horse was also used as a pack animal. Because railway lines could not withstand artillery bombardments, horses carried ammunition and supplies between the railheads and the rear trenches, though the horses generally were not used in the actual trench zone. This role of horses was critical, and thus horse fodder was the single largest commodity shipped to the front by some countries. Following the war, many cavalry regiments were converted to mechanised, armoured divisions, with light tanks developed to perform many of the cavalry's original roles.
World War II: Several nations used horse units during World War II. The Polish army used mounted infantry to defend against the armies of Nazi Germany during the 1939 invasion. Both the Germans and the Soviet Union maintained cavalry units throughout the war, particularly on the Eastern Front. The British Army used horses early in the war, and the final British cavalry charge was on March 21, 1942, when the Burma Frontier Force encountered Japanese infantry in central Burma. The only American cavalry unit during World War II was the 26th Cavalry. They challenged the Japanese invaders of Luzon, holding off armoured and infantry regiments during the invasion of the Philippines, repelled a unit of tanks in Binalonan, and successfully held ground for the Allied armies' retreat to Bataan.
Throughout the war, horses and mules were an essential form of transport, especially by the British in the rough terrain of Southern Europe and the Middle East. The United States Army utilised a few cavalry and supply units during the war, but there were concerns that the Americans did not use horses often enough. In the campaigns in North Africa, generals such as George S. Patton lamented their lack, saying, "had we possessed an American cavalry division with pack artillery in Tunisia and in Sicily, not a German would have escaped."
The German and the Soviet armies used horses until the end of the war for transportation of troops and supplies. The German Army, strapped for motorised transport because its factories were needed to produce tanks and aircraft, used around 2.75 million horses – more than it had used in World War I. One German infantry division in Normandy in 1944 had 5,000 horses. The Soviets used 3.5 million horses.
Recognition: While many statues and memorials have been erected to human heroes of war, often shown with horses, a few have also been created specifically to honor horses or animals in general. One example is the Horse Memorial in Port Elizabeth in the Eastern Cape province of South Africa. Both horses and mules are honored in the Animals in War Memorial in London's Hyde Park.
Horses have also at times received medals for extraordinary deeds. After the Charge of the Light Brigade during the Crimean War, a surviving horse named Drummer Boy, ridden by an officer of the 8th Hussars, was given an unofficial campaign medal by his rider that was identical to those awarded to British troops who served in the Crimea, engraved with the horse's name and an inscription of his service. A more formal award was the PDSA Dickin Medal, an animals' equivalent of the Victoria Cross, awarded by the People's Dispensary for Sick Animals charity in the United Kingdom to three horses that served in World War II.
Modern uses: Today, many of the historical military uses of the horse have evolved into peacetime applications, including exhibitions, historical reenactments, work of peace officers, and competitive events. Formal combat units of mounted cavalry are mostly a thing of the past, with horseback units within the modern military used for reconnaissance, ceremonial, or crowd control purposes. With the rise of mechanised technology, horses in formal national militias were displaced by tanks and armored fighting vehicles, often still referred to as "cavalry".
Active military: Organised armed fighters on horseback are occasionally seen. The best-known current examples are the Janjaweed, militia groups seen in the Darfur region of Sudan, who became notorious for their attacks upon unarmed civilian populations in the Darfur conflict. Many nations still maintain small numbers of mounted military units for certain types of patrol and reconnaissance duties in extremely rugged terrain, including the conflict in Afghanistan.
At the beginning of Operation Enduring Freedom, Operational Detachment Alpha 595 teams were covertly inserted into Afghanistan on October 19, 2001. Horses were the only suitable method of transport in the difficult mountainous terrain of Northern Afghanistan. They were the first U.S. soldiers to ride horses into battle since January 16, 1942, when the U.S. Army’s 26th Cavalry Regiment charged an advanced guard of the 14th Japanese Army as it advanced from Manila.
The only remaining operationally ready, fully horse-mounted regular regiment in the world is the Indian Army's 61st Cavalry.
Law enforcement and public safety: Mounted police have been used since the 18th century, and still are used worldwide to control traffic and crowds, patrol public parks, keep order in processions and during ceremonies, and perform general street patrol duties. Today, many cities still have mounted police units. In rural areas, horses are used by law enforcement for mounted patrols over rugged terrain, crowd control at religious shrines, and border patrol.
In rural areas, law enforcement that operates outside of incorporated cities may also have mounted units. These include specially deputised, paid or volunteer mounted search and rescue units sent into roadless areas on horseback to locate missing people. Law enforcement in protected areas may use horses in places where mechanised transport is difficult or prohibited. Horses can be an essential part of an overall team effort as they can move faster on the ground than a human on foot, can transport heavy equipment, and provide a more rested rescue worker when a subject is found.
Ceremonial and educational uses: Many countries throughout the world maintain traditionally trained and historically uniformed cavalry units for ceremonial, exhibition, or educational purposes. One example is the Horse Cavalry Detachment of the U.S. Army's 1st Cavalry Division. This unit of active duty soldiers approximates the weapons, tools, equipment and techniques used by the United States Cavalry in the 1880s. It is seen at change of command ceremonies and other public appearances. A similar detachment is the Governor General's Horse Guards, Canada's Household Cavalry regiment, the last remaining mounted cavalry unit in the Canadian Forces. Nepal's King's Household Cavalry is a ceremonial unit with over 100 horses and is the remainder of the Nepalese cavalry that existed since the 19th century. An important ceremonial use is in military funerals, which often have a caparisoned horse as part of the procession, "to symbolize that the warrior will never ride again".
Horses are also used in many historical reenactments. Reenactors try to recreate the conditions of the battle or tournament with equipment that is as authentic as possible.
Equestrian sport: Modern-day Olympic equestrian events are rooted in cavalry skills and classical horsemanship. The first equestrian events at the Olympics were introduced in 1912, and through 1948, competition was restricted to active-duty officers on military horses. Only after 1952, as mechanisation of warfare reduced the number of military riders, were civilian riders allowed to compete. Dressage traces its origins to Xenophon and his works on cavalry training methods, developing further during the Renaissance in response to a need for different tactics in battles where firearms were used. The three-phase competition known as Eventing developed out of cavalry officers' needs for versatile, well-schooled horses. Though show jumping developed largely from fox hunting, the cavalry considered jumping to be good training for their horses, and leaders in the development of modern riding techniques over fences, such as Federico Caprilli, came from military ranks. Beyond the Olympic disciplines are other events with military roots. Competitions with weapons, such as mounted shooting and tent pegging, test the combat skills of mounted riders.
See also: Equestrianism
Great Stirrup Controversy
List of historical military horses
Notes:
References:
Sources:
Further reading: Barton, P. G. (2019), "The Medieval Powys Warhorse", Montgomeryshire Collections, 107
Hacker, Barton C. (August 1997). "Military Technology and World History: A Reconnaissance". The History Teacher. 30 (4): 461–487. doi:10.2307/494141. JSTOR 494141.
Harrison, Sunny (2022). "How to make a warhorse: violence and behavioural control in late medieval hippiatric treatises". Journal of Medieval History.
External links: The Institute for Ancient Equestrian Studies (IAES)
The Society of the Military Horse
Historic films showing horses in World War I at europeanfilmgateway.eu
Warhorse: the archaeology of a medieval revolution?, AHRC funded research project by the University of Exeter and the University of East Anglia |
mil_tactics_continued_pretraining.csv | Humanitarian aid | Types:
Food aid: Food aid is assistance in which food is given to countries in urgent need of food supplies, especially if they have just experienced a natural disaster. Food aid can be provided by importing food from the donor, buying food locally, or providing cash.
The welfare impacts of any food aid-induced changes in food prices are decidedly mixed, underscoring the reality that it is impossible to generate only positive intended effects from an international aid program.
Changed consumption patterns: Food aid that is relatively inappropriate to local uses can distort consumption patterns. Food aid is usually exported from temperate climate zones and is often different than the staple crops grown in recipient countries, which usually have a tropical climate. The logic of food export inherently entails some effort to change consumers' preferences, to introduce recipients to new foods and thereby stimulate demand for foods with which recipients were previously unfamiliar or which otherwise represent only a small portion of their diet.
Massive shipments of wheat and rice into the West African Sahel during the food crises of the mid-1970s and mid-1980s were widely believed to stimulate a shift in consumer demand from indigenous coarse grains – millet and sorghum – to western crops such as wheat. During the 2000 drought in northern Kenya, the price of changaa (a locally distilled alcohol) fell significantly and consumption seems to have increased as a result. This was a result of grain food aid inflows increasing the availability of low-cost inputs to the informal distilling industry.
Natural resource overexploitation: Recent research suggests that patterns of food aid distribution may inadvertently affect the natural environment, by changing consumption patterns and by inducing locational change in grazing and other activities. A pair of studies in Northern Kenya found that food aid distribution seems to induce greater spatial concentration of livestock around distribution points, causing localized rangeland degradation, and that food aid provided as whole grain requires more cooking, and thus more fuelwood, stimulating local deforestation.
Medical humanitarian aid: There are different kinds of medical humanitarian aid, including: providing medical supplies and equipment; sending professionals to an affected region; and, long-term training for local medical staff. Such aid emerged when international organizations stepped in to respond to the need of national governments for global support and partnership to address natural disasters, wars, and other crises that impact people's health. Often, a humanitarian aid organization would clash with a government's approach to the unfolding domestic conflict. In such cases, humanitarian aid organizations have sought out autonomy to extend help regardless of political or ethnic affiliation.
Limitations: Humanitarian medical aid as a sector possesses several limitations. First, multiple organizations often exist to solve the same problem. Rather than collaborating to address a given situation, organizations frequently interact as competitors, which creates bottlenecks of treatment and supplies. A second limitation is how humanitarian organizations are focused on a specific disaster or epidemic, without a plan for whatever might come next; international organizations frequently enter a region, provide short term aid, and then exit without ensuring local capacity to maintain or sustain this medical care. Finally, humanitarian medical aid assumes a biomedical approach which does not always account for the alternative beliefs and practices about health and well-being in the affected regions. This problem is rarely explored as most studies conducted are done from the lens of the donor or Westernized humanitarian organization rather than the recipient country's perspective. Discovering ways of encouraging locals to embrace bio-medicine approaches while simultaneously respecting a given people's culture and beliefs remains a major challenge for humanitarian aid organizations; in particular as organizations constantly enter new regions as crises occur. However, understanding how to provide aid cohesively with existing regional approaches is necessary in securing the local peoples' acceptance of the humanitarian aid's work.
Funding sources: Aid is funded by donations from individuals, corporations, governments and other organizations. The funding and delivery of humanitarian aid is increasingly international, making it much faster, more responsive, and more effective in coping with major emergencies affecting large numbers of people (e.g. see Central Emergency Response Fund). The United Nations Office for the Coordination of Humanitarian Affairs (OCHA) coordinates the international humanitarian response to a crisis or emergency pursuant to Resolution 46/182 of the United Nations General Assembly. The need for aid is ever-increasing and has long outstripped the financial resources available.
The Central Emergency Response Fund was established by the United Nations General Assembly in 2005.
Delivery of humanitarian aid:
Methods of delivery: Humanitarian aid spans a wide range of activities, including providing food aid, shelter, education, healthcare or protection. The majority of aid is provided in the form of in-kind goods or assistance, with cash and vouchers constituting only 6% of total humanitarian spending. However, evidence has shown that cash transfers can be better for recipients, as they give recipients choice and control, can be more cost-efficient, and are better for local markets and economies.
Humanitarian aid is not only delivered through aid workers sent by bilateral, multilateral or intergovernmental organizations, such as the United Nations. Actors like the affected people themselves, civil society, local informal first-responders, the diaspora, businesses, local governments, the military, and local and international non-governmental organizations all play a crucial role in the timely delivery of humanitarian aid.
How aid is delivered can affect the quality and quantity of aid. Often in disaster situations, international aid agencies work hand in hand with local agencies. There can be different arrangements for the roles these agencies play, and such arrangements affect the quality of both hard and soft aid delivered.
Humanitarian access: Securing access to humanitarian aid in post-disasters, conflicts, and complex emergencies is a major concern for humanitarian actors. To win assent for interventions, aid agencies often espouse the principles of humanitarian impartiality and neutrality. However, gaining secure access often involves negotiation and the practice of humanitarian diplomacy. In the arena of negotiations, humanitarian diplomacy is ostensibly used by humanitarian actors to try to persuade decision makers and leaders to act, at all times and in all circumstances, in the interest of vulnerable people and with full respect for fundamental humanitarian principles. However, humanitarian diplomacy is also used by state actors as part of their foreign policy.
United Nations' response: The UN implements a multifaceted approach to assist migrants and refugees throughout their relocation process. This includes children's integration into the local education system, food security, and access to health services. The approach also encompasses humanitarian transportation, the goal of which is to ensure migrants and refugees retain access to basic goods and services and the labour market. Basic needs, including access to shelter, clean water, and child protection, are supplemented by the UN's efforts to facilitate social integration and legal regularization for displaced individuals.
Use of technology and data: Since the 2010 Haiti Earthquake, the institutional and operational focus of humanitarian aid has been on leveraging technology to enhance humanitarian action, ensuring that more formal relationships are established, and improving the interaction between formal humanitarian organizations such as the United Nations (UN) Office for the Coordination of Humanitarian Affairs (OCHA) and informal volunteer and technological communities known as digital humanitarians.
The recent rise of Big Data, high-resolution satellite imagery and new platforms powered by advanced computing has already prompted the development of crisis mapping to help humanitarian organizations make sense of the vast volume and velocity of information generated during disasters. For example, crowdsourced maps (such as OpenStreetMap) and social media messages on Twitter were used during the 2010 Haiti Earthquake and Hurricane Sandy to trace leads on missing people, assess infrastructure damage and raise new alerts for emergencies.
Gender and humanitarian aid: Even prior to a humanitarian crisis, gender differences exist. Women have limited access to paid work, are at risk of child marriage, and are more exposed to gender-based violence, such as rape and domestic abuse. Conflict and natural disasters exacerbate women's vulnerabilities. When delivering humanitarian aid, it is thus important for humanitarian actors, such as the United Nations, to include challenges specific to women in their humanitarian response. The Inter-Agency Standing Committee provides guidelines for humanitarian actors on how to be inclusive of gender when delivering humanitarian aid. It recommends that agencies collect data disaggregated by sex and age to better understand which group of the population is in need of what type of aid. In recent years, the United Nations has increasingly used sex- and age-disaggregated data and consulted with gender specialists. In the assessment phase, several UN agencies meet to compile data and work on a humanitarian response plan. Throughout these plans, women-specific challenges are listed and sex- and age-disaggregated data are used, so that when aid is delivered to a country facing a humanitarian crisis, girls and women can access the aid they need.
Problematic aspects:
Economic distortions due to food aid: Some of the unintended effects of food aid include labor and production disincentives, changes in recipients' food consumption patterns and natural resources use patterns, distortion of social safety nets, distortion of NGO operational activities, price changes, and trade displacement. These issues arise from targeting inefficacy and poor timing of aid programs. Food aid can harm producers by driving down prices of local products, whereas the producers are not themselves beneficiaries of food aid. Unintentional harm occurs when food aid arrives or is purchased at the wrong time, when food aid distribution is not well-targeted to food-insecure households, and when the local market is relatively poorly integrated with broader national, regional and global markets.
Food aid can drive down local or national food prices in at least three ways.
First, monetization of food aid can flood the market, increasing supply. In order to be granted the right to monetize, operational agencies must demonstrate that the recipient country has adequate storage facilities and that the monetized commodity will not result in a substantial disincentive in either domestic agriculture or domestic marketing.
Second, households receiving aid may decrease demand for the commodity received or for locally produced substitutes or, if they produce substitutes or the commodity received, they may sell more of it. This can be most easily understood by dividing a population in a food aid recipient area into subpopulations based on two criteria: whether or not they receive food aid (recipients vs. non-recipients) and whether they are net sellers or net buyers of food. Because the price they receive for their output is lower, however, net sellers are unambiguously worse off if they do not receive food aid or some other form of compensatory transfer.
Finally, recipients may sell food aid to purchase other necessities or complements, driving down prices of the food aid commodity and its substitutes, but also increasing demand for complements. Most recipient economies are not robust and food aid inflows can cause large price decreases, decreasing producer profits, limiting producers' abilities to pay off debts and thereby diminishing both capacity and incentives to invest in improving agricultural productivity. However, food aid distributed directly or through FFW programs to households in northern Kenya during the lean season can foster increased purchase of agricultural inputs such as improved seeds, fertilizer and hired labor, thereby increasing agricultural productivity.
Labor distortion can arise when Food-For-Work (FFW) programs are more attractive than work on recipients' own farms/businesses, either because the FFW pays immediately, or because the household considers the payoffs to the FFW project to be higher than the returns to labor on its own plots. Food aid programs hence take productive inputs away from local private production, creating a distortion due to substitution effects, rather than income effects.
Beyond labor disincentive effects, food aid can have the unintended consequence of discouraging household-level production. Poor timing of aid and FFW wages that are above market rates cause negative dependency by diverting labor from local private uses, particularly if FFW obligations decrease labor on a household's own enterprises during a critical part of the production cycle. This type of disincentive impacts not only food aid recipients but also producers who sell to areas receiving food aid flows.
FFW programs are often used to counter a perceived dependency syndrome associated with freely distributed food. However, poorly designed FFW programs may pose a greater risk of harming local production than freely distributed food does. In structurally weak economies, FFW program design is not as simple as determining the appropriate wage rate. Empirical evidence from rural Ethiopia shows that higher-income households had excess labor and thus a lower (not higher, as expected) value of time, and therefore allocated this labor to FFW schemes in which poorer households could not afford to participate due to labor scarcity. Similarly, FFW programs in Cambodia have been shown to be an additional, not alternative, source of employment, and the very poor rarely participate due to labor constraints.
Increasing existing conflicts: In addition to post-conflict settings, a large portion of aid is often directed at countries currently undergoing conflicts. However, the effectiveness of humanitarian aid, particularly food aid, in conflict-prone regions has been criticized in recent years. There have been accounts of humanitarian aid being not only inefficacious but actually fuelling conflicts in the recipient countries. Aid stealing is one of the prime ways in which conflict is promoted by humanitarian aid. Aid can be seized by armed groups, and even if it does reach the intended recipients, "it is difficult to exclude local members of a local militia group from being direct recipients if they are also malnourished and qualify to receive aid."
Furthermore, analyzing the relationship between conflict and food aid, recent research shows that the United States food aid promoted civil conflict in recipient countries on average. An increase in United States' wheat aid increased the duration of armed civil conflicts in recipient countries, and ethnic polarization heightened this effect. However, since academic research on aid and conflict focuses on the role of aid in post-conflict settings, the aforementioned finding is difficult to contextualize. Nevertheless, research on Iraq shows that "small-scale [projects], local aid spending ... reduces conflict by creating incentives for average citizens to support the government in subtle ways." Similarly, another study also shows that aid flows can "reduce conflict because increasing aid revenues can relax government budget constraints, which can [in return] increase military spending and deter opposing groups from engaging in conflict." Thus, the impact of humanitarian aid on conflict may vary depending upon the type and mode in which aid is received, and, inter alia, the local socio-economic, cultural, historical, geographical and political conditions in the recipient countries.
Increasing conflict duration: International aid organizations identify theft by armed forces on the ground as a primary unintended consequence through which food aid and other types of humanitarian aid promote conflict. Food aid usually has to be transported across large geographic territories and during the transportation it becomes a target for armed forces, especially in countries where the ruling government has limited control outside of the capital. Accounts from Somalia in the early 1990s indicate that between 20 and 80 percent of all food aid was stolen, looted, or confiscated. In the former Yugoslavia, the UN Refugee Agency (UNHCR) lost up to 30 percent of the total value of aid to Serbian armed forces. On top of that 30 percent, bribes were given to Croatian forces to pass their roadblocks in order to reach Bosnia.
The value of the stolen or lost provisions can exceed the value of the food aid alone since convoy vehicles and telecommunication equipment are also stolen. MSF Holland, an international aid organization operating in Chad and Darfur, underscored the strategic importance of these goods, stating that these "vehicles and communications equipment have a value beyond their monetary worth for armed actors, increasing their capacity to wage war".
A famous instance of humanitarian aid unintentionally helping rebel groups occurred during the Nigeria-Biafra civil war in the late 1960s, when the rebel leader Odumegwu Ojukwu only allowed aid to enter the region of Biafra if it was shipped on his planes. These shipments of humanitarian aid helped the rebel leader to circumvent the siege on Biafra placed by the Nigerian government. Experts claim that these stolen shipments of humanitarian aid caused the Biafran civil war to last years longer than it would have without the aid.
The most well-known instances of aid being seized by local warlords in recent years come from Somalia, where food aid is funneled to the Shabab, a Somali militant group that controls much of Southern Somalia. Moreover, reports reveal that Somali contractors for aid agencies have formed a cartel and act as important power brokers, arming opposition groups with the profits made from the stolen aid.
Rwandan government appropriation of food aid in the early 1990s was so problematic that aid shipments were canceled multiple times. In Zimbabwe in 2003, Human Rights Watch documented examples of residents being forced to display ZANU-PF Party membership cards before being given government food aid. In eastern Zaire, leaders of the Hema ethnic group allowed the arrival of international aid organizations only upon agreement not to give aid to the Lendu (opponents of the Hema). Humanitarian aid workers have acknowledged the threat of stolen aid and have developed strategies for minimizing the amount of theft en route. However, aid can fuel conflict even if successfully delivered to the intended population, as the recipient populations often include members of rebel groups or militia groups, or aid is "taxed" by such groups.
Academic research emphatically demonstrates that on average food aid promotes civil conflict. Namely, an increase in US food aid leads to an increase in the incidence of armed civil conflict in the recipient country. Another demonstrated correlation is that food aid prolongs existing conflicts, specifically among countries with a recent history of civil conflict. However, this research does not find an effect on conflict in countries without a recent history of civil conflict. Moreover, types of international aid other than food (which is easily stolen during delivery), namely technical assistance and cash transfers, can have different effects on civil conflict.
Community-driven development (CDD) programs have become one of the most popular tools for delivering development aid. In 2012, the World Bank supported 400 CDD programs in 94 countries, valued at US$30 billion. Academic research scrutinizes the effect of community-driven development programs on civil conflict. The Philippines' flagship development program KALAHI-CIDSS was concluded to have led to an increase in violent conflict in the country. After the program's start, some municipalities experienced a statistically significant and large increase in casualties, as compared to other municipalities that were not part of the CDD. Casualties suffered by government forces as a result of insurgent-initiated attacks increased significantly.
These results are consistent with other examples of humanitarian aid exacerbating civil conflict. One explanation is that insurgents attempt to sabotage CDD programs for political reasons – successful implementation of a government-supported project could weaken the insurgents' position. Related findings of Beath, Christia, and Enikolopov further demonstrate that a successful community-driven development program increased support for the government in Afghanistan by exacerbating conflict in the short term, revealing an unintended consequence of the aid.
Waste and corruption in humanitarian aid: Waste and corruption are hard to quantify, in part because they are often taboo subjects, but they appear to be significant in humanitarian aid. For example, it has been estimated that over $8.75 billion was lost to waste, fraud, abuse and mismanagement in the Hurricane Katrina relief effort. Non-governmental organizations have in recent years made great efforts to increase participation, accountability and transparency in dealing with aid, yet humanitarian assistance remains a poorly understood process to those meant to be receiving it—much greater investment needs to be made into researching and investing in relevant and effective accountability systems.
However, there is no clear consensus on the trade-offs between speed and control, especially in emergency situations when the humanitarian imperative of saving lives and alleviating suffering may conflict with the time and resources required to minimise corruption risks. Researchers at the Overseas Development Institute have highlighted the need to tackle corruption with, but not limited to, the following methods:
Resist the pressure to spend aid rapidly.
Continue to invest in audit capacity, beyond simple paper trails;
Establish and verify the effectiveness of complaints mechanisms, paying close attention to local power structures, security and cultural factors hindering complaints;
Clearly explain the processes during the targeting and registration stages, highlighting points such as the fact that people should not make payments to be included, and that any lists prepared by leaders or committees should be photocopied and read aloud.
Abuse of power by aid workers: Sexual exploitation and abuse in humanitarian response have been reported following humanitarian interventions in Liberia, Guinea and Sierra Leone in 2002, in the Central African Republic and in the Democratic Republic of the Congo.
Reporting in 2021 on a Racial Equity Index report indicated that just under two-thirds of aid workers had experienced racism and that 98% of survey respondents had witnessed it.
Contrary practice: Countries or warring parties that prevent humanitarian relief generally face near-unanimous criticism. Such was the case for the Derg regime, which prevented relief to the population of Tigray in the 1980s, and the prevention of relief aid in the Tigray War of 2020–2021 by the Abiy Ahmed Ali regime of Ethiopia was again widely condemned.
Humanitarian aid in conflict zones:
Aid workers: Aid workers are people deployed, often internationally, to do humanitarian aid work.
Composition: The total number of humanitarian aid workers around the world has been calculated by ALNAP, a network of agencies working in the Humanitarian System, as 210,800 in 2008. This is made up of roughly 50% from NGOs, 25% from the Red Cross/Red Crescent Movement and 25% from the UN system. In 2010, it was reported that the humanitarian fieldworker population increased by approximately 6% per year over the previous 10 years.
Psychological issues: Aid workers are exposed to tough conditions and have to be flexible, resilient, and responsible in environments that impose severe psychological strain, such that trauma is common. In recent years, a number of concerns have been raised about the mental health of aid workers.
The most prevalent issue faced by humanitarian aid workers is post-traumatic stress disorder (PTSD). Adjustment to normal life again can be a problem, with feelings such as guilt being caused by the simple knowledge that international aid workers can leave a crisis zone, whilst nationals cannot.
A 2015 survey conducted by The Guardian, with aid workers of the Global Development Professionals Network, revealed that 79 percent experienced mental health issues.
Attacks:
Standards: The humanitarian community has initiated a number of interagency initiatives to improve accountability, quality and performance in humanitarian action. Four of the most widely known initiatives are ALNAP, the CHS Alliance, the Sphere Project and the Core Humanitarian Standard on Quality and Accountability (CHS). Representatives of these initiatives began meeting together on a regular basis in 2003 in order to share common issues and harmonise activities where possible.
Sphere Project: The Sphere Project handbook, Humanitarian Charter and Minimum Standards in Disaster Response, which was produced by a coalition of leading non-governmental humanitarian agencies, lists the following principles of humanitarian action:
The right to life with dignity
The distinction between combatants and non-combatants
The principle of non-refoulement
Core Humanitarian Standard on Quality and Accountability: Another humanitarian standard used is the Core Humanitarian Standard on Quality and Accountability (CHS). It was approved by the CHS Technical Advisory Group in 2014, and has since been endorsed by many humanitarian actors such as "the Boards of the Humanitarian Accountability Partnership (HAP), People in Aid and the Sphere Project". It comprises nine core standards, which are complemented by detailed guidelines and indicators.
While some critics were questioning whether the sector will truly benefit from the implementation of yet another humanitarian standard, others have praised it for its simplicity. Most notably, it has replaced the core standards of the Sphere Handbook and it is regularly referred to and supported by officials from the United Nations, the EU, various NGOs and institutes.
History:
Origins: The beginnings of organized international humanitarian aid can be traced to the 19th century. Early campaigns include British aid to distressed populations on the continent and in Sweden during the Napoleonic Wars, and the international relief campaigns during the Great Irish Famine in the 1840s.
In 1854, when the Crimean War began, Florence Nightingale and her team of 38 nurses arrived at the Barracks Hospital of Scutari, where there were thousands of sick and wounded soldiers. Nightingale and her team watched as the understaffed military hospitals struggled to maintain hygienic conditions and meet the needs of patients. Ten times more soldiers were dying of disease than from battle wounds. Typhus, typhoid, cholera and dysentery were common in the army hospitals. Nightingale and her team established a kitchen and a laundry and improved hygiene. More nurses arrived to aid in the efforts and the General Hospital at Scutari was able to care for 6,000 patients.
Nightingale's contributions still influence humanitarian aid efforts. This is especially true in regard to Nightingale's use of statistics and measures of mortality and morbidity. Nightingale used principles of new science and statistics to measure progress and plan for her hospital. She kept records of the number and cause of deaths in order to continuously improve the conditions in hospitals. Her findings were that in every 1,000 soldiers, 600 were dying of communicable and infectious diseases. She worked to improve hygiene, nutrition and clean water and decreased the mortality rate from 60% to 42% to 2.2%. All of these improvements are pillars of modern humanitarian intervention. Once she returned to Great Britain she campaigned for the founding of the Royal Commission on the Health of the Army. She advocated for the use of statistics and coxcombs to portray the needs of those in conflict settings.
The most well-known origin story of formalized humanitarian aid is that of Henri Dunant, a Swiss businessman and social activist, who, upon seeing the sheer destruction and inhumane abandonment of wounded soldiers from the Battle of Solferino in June 1859, canceled his plans and began a relief response. Despite having little to no medical experience, Dunant worked alongside local volunteers to assist the wounded soldiers from all warring parties, including Austrian, Italian and French casualties, in any way he could, including the provision of food, water, and medical supplies. His graphic account of the immense suffering he witnessed, written in his book A Memory of Solferino, became a foundational text of modern humanitarianism.
A Memory of Solferino changed the world in a way that no one, let alone Dunant, could have foreseen or truly appreciated at the time. To start, Dunant was able to profoundly stir the emotions of his readers by bringing the battle and suffering into their homes, equipping them to understand the barbaric state of war and the treatment of soldiers after they were injured or killed; in and of themselves, these accounts altered the course of history. Beyond this, in his two-week experience attending to the wounded soldiers of all nationalities, Dunant inadvertently established the vital conceptual pillars of what would later become the International Committee of the Red Cross and International Humanitarian Law: impartiality and neutrality. Dunant took these ideas further with two more concepts that would profoundly alter the practice of war: first, he envisioned the creation of permanent volunteer relief societies, much like the ad hoc relief group he coordinated at Solferino, to assist wounded soldiers; second, he began an effort to call for the adoption of a treaty that would guarantee the protection of wounded soldiers and any who attempted to come to their aid.
After publishing his foundational text in 1862, progress came quickly for Dunant and his efforts to create a permanent relief society and International Humanitarian Law. The embryonic formation of the International Committee of the Red Cross had begun to take shape in 1863 when the private Geneva Society of Public Welfare created a permanent sub-committee called "The International Committee for Aid to Wounded in Situations of War". Composed of five Geneva citizens, this committee endorsed Dunant's vision to legally neutralize medical personnel responding to wounded soldiers. The constitutive conference of this committee in October 1863 created the statutory foundation of the International Committee of the Red Cross in their resolutions regarding national societies, caring for the wounded, their symbol, and most importantly the indispensable neutrality of ambulances, hospitals, medical personnel and the wounded themselves. Beyond this, in order to solidify humanitarian practice, the Geneva Society of Public Welfare hosted a convention between 8 and 22 August 1864 at the Geneva Town Hall with 16 diverse States present, including many governments of Europe, the Ottoman Empire, the United States of America (USA), Brazil and Mexico. This diplomatic conference was exceptional, not due to the number or status of its attendees but rather because of its very raison d'être. Unlike many diplomatic conferences before it, this conference's purpose was not to reach a settlement after a conflict nor to mediate between opposing interests; indeed this conference was to lay down rules for the future of conflict with aims to protect medical services and those wounded in battle.
The first of the renowned Geneva Conventions was signed on 22 August 1864; never before in history had a treaty so greatly affected how warring parties engage with one another. The basic tenets of the convention outlined the neutrality of medical services, including hospitals, ambulances, and related personnel; the requirement to care for and protect the sick and wounded during conflict; and something of particular symbolic importance to the International Committee of the Red Cross: the Red Cross emblem. For the first time in contemporary history, it was acknowledged by a representative selection of states that war had limits. The significance only grew with time through the revision and adaptation of the Geneva Convention in 1906, 1929 and 1949; additionally, supplementary treaties granted protection to hospital ships, prisoners of war and, most importantly, to civilians in wartime.
The International Committee of the Red Cross exists to this day as the guardian of International Humanitarian Law and as one of the largest providers of humanitarian aid in the world.
Late 19th century: Internationally organized humanitarian aid efforts continued to be launched for the rest of the century, often with ever-greater logistical acumen and experience. In 1876, after a drought led to cascading crop failures across Northern China, a famine broke out that lasted several years—during its course as many as 10 million people may have died from hunger and disease. British missionary Timothy Richard first called international attention to the famine in Shandong in the summer of 1876 and appealed to the foreign community in Shanghai for money to help the victims. The Shandong Famine Relief Committee was soon established, with those participating including diplomats, businessmen, as well as Christian missionaries, Catholic and Protestant alike. An international network was set up to solicit donations, ultimately bringing in 204,000 silver taels, the equivalent of $7–10 million if valued at 2012 silver prices.
Simultaneously in India, another campaign was launched in response to the Great Famine of 1876–78. Retrospectively, authorities from across the administrative and colonial structures of the British Raj and princely states have been to various degrees blamed for the shocking severity of the famine, with critiques revolving around their laissez-faire attitude and the resulting lack of any adequate policy to address the mass death and suffering across the subcontinent, though meaningful relief measures began to be introduced towards the famine's end. Privately, a famine relief fund was set up in the United Kingdom, raising £426,000 within its first few months of operation.
Early 20th century: The response to the Russian famine of 1921–1922 was intertwined with, and informed by, relief efforts related to the profound destruction and disruption caused by World War I, including those of the Red Cross and Red Crescent movement. The famine struck a country already immensely burdened by systemic agricultural and logistical struggles, then ravaged by successive periods of industrial war, blockade, bad harvests, the Russian Revolution with its resulting political restructuring and social upheaval, and the insurgency and war communism of the Russian Civil War that followed. In the nascent Russian Soviet Federative Socialist Republic, Vladimir Lenin allowed his personal friend and acclaimed thinker Maxim Gorky to pen an open letter to the international community asking for relief for the Russian people. Despite the ongoing ideological, material, and military conflicts levied by both the new socialist state and the capitalist international community towards one another, efforts to aid the starving population of Soviet Russia were intensive, deliberate, and effective. American efforts, led in large part by future president Herbert Hoover, as well as those of the International Committee for Russian Relief, joined existing humanitarian organizations in delivering food and medicine to Russia over the course of 1921 and 1922, at some points feeding over 10 million Russians every day. With the United States left relatively untouched by World War I, its intensive private and public efforts in Russia constituted a clear expression of its new paramount soft power on the international stage, with power projection from European states having been either totally destroyed or severely limited in scope in the years following the conflict.
1980s: Early attempts at famine relief were in private hands and were limited in their financial and organizational capabilities. It was only in the 1980s that global news coverage and celebrity endorsement were mobilized to galvanize large-scale, government-led famine relief (and other forms of relief) in response to disasters around the world. The 1983–85 famine in Ethiopia caused upwards of 1 million deaths and was documented by a BBC news crew, with Michael Buerk describing "a biblical famine in the 20th Century" and "the closest thing to hell on Earth".
Live Aid, a 1985 fund-raising effort headed by Bob Geldof induced millions of people in the West to donate money and to urge their governments to participate in the relief effort in Ethiopia. Some of the proceeds also went to the famine hit areas of Eritrea.
2000s: A 2004 reform initiative by Jan Egeland resulted in the creation of the Humanitarian Cluster System, designed to improve coordination between humanitarian agencies working on the same issues.
2010s:
World Humanitarian Summit: The first global summit on humanitarian diplomacy was held in 2016 in Istanbul, Turkey. An initiative of United Nations Secretary-General Ban Ki-moon, the World Humanitarian Summit included participants from governments, civil society organizations, private organizations, and groups affected by humanitarian need. Issues that were discussed included: preventing and ending conflict, managing crises, and aid financing.
Attendees at the summit agreed a series of reforms on aid spending called the Grand Bargain, including a commitment to spend 25% of aid funds directly through local and national humanitarian aid organizations.
COVID-19 pandemic: Following the outbreak of the COVID-19 pandemic in 2019, approximately 216 million individuals required humanitarian aid across 69 countries. Many humanitarian assistance efforts and reforms were made in response to the pandemic.
2020s: In 2020, there was an exponential increase in humanitarian needs, with 235 million people, or 1 in 33 individuals globally, requiring humanitarian assistance and protection by the year's end. A report documented an 85% increase in humanitarian aid during 2020 compared with the year before.
See also: Church asylum
Emergency management
Humanitarian corridor
Humanitarian protection
Humanitarian Response Index
Vienna Declaration and Programme of Action
World Humanitarian Day
References:
Further reading: Götz, Norbert; Brewis, Georgina; Werther, Steffen (2020). Humanitarianism in the Modern World: The Moral Economy of Famine Relief. Cambridge: Cambridge University Press. doi:10.1017/9781108655903. ISBN 978-1-108-65590-3.
James, Eric (2008). Managing Humanitarian Relief: An Operational Guide for NGOs. Rugby: Practical Action. Practical Action/Intermediate Technology. ISBN 978-1-853-39669-4.
Minear, Larry (2002). The Humanitarian Enterprise: Dilemmas and Discoveries. West Hartford, CT: Kumarian Press. ISBN 1-56549-149-1.
Waters, Tony (2001). Bureaucratizing the Good Samaritan: The Limitations of Humanitarian Relief Operations. Boulder: Westview Press. ISBN 978-0-813-36790-3.
External links: The Humanitarian Organisations Dataset (HOD): 2,505 organizations active in the humanitarian sector
"Active Learning Network for Accountability and Performance". alnap.org.
"APCN (Africa Partner Country Network)". apan.org.
"CE-DAT: The Complex Emergency Database". cedat.org.
"Centre for Safety and Development". centreforsafety.org.
"The Code of Conduct: humanitarian principles in practice". International Committee of the Red Cross. 20 September 2004.
"Doctors of the World". medecinsdumonde.org.
"EM-DAT: The International Disaster Database". emdat.be.
"The New Humanitarian". thenewhumanitarian.org.
"Protection work during armed conflict and other situations of violence: professional standards". International Committee of the Red Cross. December 2009.
"The Center for Disaster and Humanitarian Assistance Medicine". CDHAM.org.
"The ODI Humanitarian Policy Group". odi.org.uk. Archived from the original on 8 February 2006. Retrieved 7 March 2006.
"UN ReliefWeb". reliefweb.int.
Critiques of humanitarian aid: Rieff, David; Myers, Joanne J. "A Bed for the Night: Humanitarianism in Crisis". Archived from the original on 30 March 2007. |
mil_tactics_continued_pretraining.csv | ISBN (identifier) | History: The Standard Book Number (SBN) is a commercial system using nine-digit code numbers to identify books. In 1965, British bookseller and stationers WHSmith announced plans to implement a standard numbering system for its books. They hired consultants to work on their behalf, and the system was devised by Gordon Foster, emeritus professor of statistics at Trinity College Dublin. The International Organization for Standardization (ISO) Technical Committee on Documentation sought to adapt the British SBN for international use. The ISBN identification format was conceived in 1967 in the United Kingdom by David Whitaker (regarded as the "Father of the ISBN") and in 1968 in the United States by Emery Koltay (who later became director of the U.S. ISBN agency R. R. Bowker).
The 10-digit ISBN format was developed by the ISO and was published in 1970 as international standard ISO 2108. The United Kingdom continued to use the nine-digit SBN code until 1974. ISO has appointed the International ISBN Agency as the registration authority for ISBN worldwide and the ISBN Standard is developed under the control of ISO Technical Committee 46/Subcommittee 9 TC 46/SC 9. The ISO on-line facility only refers back to 1978.
An SBN may be converted to an ISBN by prefixing the digit "0". For example, the second edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has "SBN 340 01381 8", where "340" indicates the publisher, "01381" is the serial number assigned by the publisher, and "8" is the check digit. By prefixing a zero, this can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Some publishers, such as Ballantine Books, would sometimes use 12-digit SBNs where the last three digits indicated the price of the book; for example, Woodstock Handmade Houses had a 12-digit Standard Book Number of 345-24223-8-595 (valid SBN: 345-24223-8, ISBN: 0-345-24223-8), and it cost US$5.95.
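The check-digit arithmetic behind that conversion can be sketched in a few lines of Python. This is only an illustration, and the helper names are mine rather than part of any standard or library: an ISBN-10 check digit makes the weighted sum of all ten digits divisible by 11, so prefixing a zero to an SBN leaves the check digit valid.

def isbn10_check_digit(first_nine: str) -> str:
    # Weights 10, 9, ..., 2 over the first nine digits; the check digit
    # brings the total weighted sum to a multiple of 11 ("X" stands for 10).
    total = sum(int(d) * w for d, w in zip(first_nine, range(10, 1, -1)))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def sbn_to_isbn10(sbn: str) -> str:
    # Prefixing "0" adds 0 * 10 to the weighted sum, so the existing
    # check digit does not need to be re-calculated.
    return "0" + sbn.replace("-", "").replace(" ", "")

isbn10 = sbn_to_isbn10("340 01381 8")               # -> '0340013818'
assert isbn10_check_digit(isbn10[:9]) == isbn10[9]  # check digit 8 still verifies

Running the snippet reproduces the Hodder example above: SBN 340 01381 8 becomes ISBN 0-340-01381-8 with the same check digit.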
Since 1 January 2007, ISBNs have contained thirteen digits, a format that is compatible with "Bookland" European Article Numbers, which have 13 digits.
The United States, with 3.9 million registered ISBNs in 2020, was by far the biggest user of the ISBN identifier in 2020, followed by the Republic of Korea (329,582), Germany (284,000), China (263,066), the UK (188,553) and Indonesia (144,793). Lifetime ISBNs registered in the United States are over 39 million as of 2020.
Overview: A separate ISBN is assigned to each edition and variation (except reprintings) of a publication. For example, an ebook, audiobook, paperback, and hardcover edition of the same book must each have a different ISBN assigned to it. The ISBN is thirteen digits long if assigned on or after 1 January 2007, and ten digits long if assigned before 2007. An International Standard Book Number consists of four parts (if it is a 10-digit ISBN) or five parts (for a 13-digit ISBN).
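To see how the thirteen-digit form relates to the ten-digit form in practice, the following sketch computes the EAN-13 style check digit and converts an ISBN-10 into its 978-prefixed ISBN-13 equivalent. The function names are illustrative, not taken from any official tool.

def isbn13_check_digit(first_twelve: str) -> str:
    # EAN-13 rule: digits are weighted 1 and 3 alternately from the left;
    # the check digit brings the total to a multiple of 10.
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first_twelve))
    return str((10 - total % 10) % 10)

def isbn10_to_isbn13(isbn10: str) -> str:
    # Keep the first nine digits, prepend the 978 prefix element,
    # and recompute the check digit under the 13-digit rule.
    core = isbn10.replace("-", "").replace(" ", "")[:9]
    body = "978" + core
    return body + isbn13_check_digit(body)

isbn10_to_isbn13("0-340-01381-8")   # -> '9780340013816'

Note that the check digit changes (here from 8 to 6) because the 10-digit and 13-digit forms use different weighting rules.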
Section 5 of the International ISBN Agency's official user manual describes the structure of the 13-digit ISBN, as follows:
for a 13-digit ISBN, a prefix element – a GS1 prefix: so far 978 or 979 have been made available by GS1,
the registration group element (language-sharing country group, individual country or territory),
the registrant element,
the publication element, and
a checksum character or check digit.
A 13-digit ISBN can be separated into its parts (prefix element, registration group, registrant, publication and check digit), and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts (registration group, registrant, publication and check digit) of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits.
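Because the element lengths vary, correct hyphenation depends on the range tables published by the International ISBN Agency (its "RangeMessage" data). The sketch below only demonstrates the mechanism: the range entries shown are a small illustrative stand-in, not the authoritative table, and the function assumes the registration group element is already known.

# Illustrative ranges only; the real table comes from the ISBN Agency's
# RangeMessage file and is far larger. Each entry: (low, high, registrant length).
REGISTRANT_RANGES = {
    ("978", "0"): [(0, 19, 2), (200, 699, 3), (7000, 8499, 4),
                   (85000, 89999, 5), (900000, 949999, 6), (9500000, 9999999, 7)],
}

def hyphenate_isbn13(isbn: str, group: str) -> str:
    # Split a 13-digit ISBN into prefix-group-registrant-publication-check.
    digits = isbn.replace("-", "")
    prefix, rest = digits[:3], digits[3:]
    assert rest.startswith(group), "wrong registration group for this ISBN"
    body, check = rest[len(group):-1], rest[-1]
    for low, high, length in REGISTRANT_RANGES[(prefix, group)]:
        if low <= int(body[:length]) <= high:
            return "-".join([prefix, group, body[:length], body[length:], check])
    raise ValueError("registrant element not in any known range")

hyphenate_isbn13("9780340013816", "0")   # -> '978-0-340-01381-6'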
Issuing process: ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory regardless of the publication language. The ranges of ISBNs assigned to any particular country are based on the publishing profile of the country concerned, and so the ranges will vary depending on the number of books and the number, type, and size of publishers that are active. Some ISBN registration agencies are based in national libraries or within ministries of culture and thus may receive direct funding from the government to support their services. In other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded.
A full directory of ISBN agencies is available on the International ISBN Agency website. A list for a few countries is given below:
Australia – Thorpe-Bowker
Brazil – The National Library of Brazil (up to 28 February 2020)
Brazil – Câmara Brasileira do Livro (From 1 March 2020)
Canada – English-language publications: Library and Archives Canada, a government agency; French-language publications: Bibliothèque et Archives nationales du Québec
Colombia – Cámara Colombiana del Libro, an NGO
Hong Kong – Books Registration Office (BRO), under the Hong Kong Public Libraries
Iceland – Landsbókasafn (National and University Library of Iceland)
India – The Raja Rammohun Roy National Agency for ISBN (Book Promotion and Copyright Division), under Department of Higher Education, a constituent of the Ministry of Human Resource Development
Israel – The Israel Center for Libraries
Italy – EDISER srl, owned by Associazione Italiana Editori (Italian Publishers Association)
Kenya – National Library of Kenya
Latvia – Latvian ISBN Agency
Lebanon – Lebanese ISBN Agency
Maldives – The National Bureau of Classification (NBC)
Malta – The National Book Council (Maltese: Il-Kunsill Nazzjonali tal-Ktieb)
Morocco – The National Library of Morocco
New Zealand – The National Library of New Zealand
Nigeria – National Library of Nigeria
Pakistan – National Library of Pakistan
Philippines – National Library of the Philippines
South Africa – National Library of South Africa
Spain – Spanish ISBN Agency – Agencia del ISBN
Turkey – General Directorate of Libraries and Publications, a branch of the Ministry of Culture
United Kingdom and Republic of Ireland – Nielsen Book Services Ltd, part of NIQ
United States – R. R. Bowker
Registration group element: The ISBN registration group element is a 1-to-5-digit number that is valid within a single prefix element (i.e. one of 978 or 979), and can be separated between hyphens, such as "978-1-...". Registration groups have primarily been allocated within the 978 prefix element. The single-digit registration groups within the 978-prefix element are: 0 or 1 for English-speaking countries; 2 for French-speaking countries; 3 for German-speaking countries; 4 for Japan; 5 for Russian-speaking countries; and 7 for People's Republic of China. Example 5-digit registration groups are 99936 and 99980, for Bhutan. The allocated registration groups are: 0–5, 600–631, 65, 7, 80–94, 950–989, 9910–9989, and 99901–99993. Books published in rare languages typically have longer group elements.
Within the 979 prefix element, the registration group 0 is reserved for compatibility with International Standard Music Numbers (ISMNs), but such material is not actually assigned an ISBN. The registration groups within prefix element 979 that have been assigned are 8 for the United States of America, 10 for France, 11 for the Republic of Korea, and 12 for Italy.
The original 9-digit standard book number (SBN) had no registration group identifier, but prefixing a zero to a 9-digit SBN creates a valid 10-digit ISBN.
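To illustrate how a variable-length registration group is read off the digits that follow the 978 prefix, here is a minimal Python sketch based on the allocated ranges listed above; the function name and the way the ranges are represented are illustrative assumptions, not part of any official tool.

# Allocated 978 registration groups from the list above,
# stored as (length in digits, low, high). Illustrative representation only.
ALLOCATED_978_GROUPS = [
    (1, 0, 5), (3, 600, 631), (2, 65, 65), (1, 7, 7),
    (2, 80, 94), (3, 950, 989), (4, 9910, 9989), (5, 99901, 99993),
]

def registration_group(digits_after_978):
    # digits_after_978: the digits that follow "978-", as a string with hyphens removed.
    for length, low, high in ALLOCATED_978_GROUPS:
        if low <= int(digits_after_978[:length]) <= high:
            return digits_after_978[:length]
    return None  # prefix not currently allocated

# registration_group("030640615") returns "0", an English-language group.

Because the allocated ranges listed above do not overlap as prefixes, at most one candidate length can match any given ISBN.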
Registrant element: The national ISBN agency assigns the registrant element (cf. Category:ISBN agencies) and an accompanying series of ISBNs within that registrant element to the publisher; the publisher then allocates one of the ISBNs to each of its books. In most countries, a book publisher is not legally required to assign an ISBN, although most large bookstores only handle publications that have ISBNs assigned to them.
The International ISBN Agency maintains the details of over one million ISBN prefixes and publishers in the Global Register of Publishers. This database is freely searchable over the internet.
Publishers receive blocks of ISBNs, with larger blocks allotted to publishers expecting to need them; a small publisher may receive ISBNs of one or more digits for the registration group identifier, several digits for the registrant, and a single digit for the publication element. Once that block of ISBNs is used, the publisher may receive another block of ISBNs, with a different registrant element. Consequently, a publisher may have different allotted registrant elements. There also may be more than one registration group identifier used in a country. This might occur once all the registrant elements from a particular registration group have been allocated to publishers. |
By using variable block lengths, registration agencies are able to customise the allocations of ISBNs that they make to publishers. For example, a large publisher may be given a block of ISBNs where fewer digits are allocated for the registrant element and many digits are allocated for the publication element; likewise, countries publishing many titles have few allocated digits for the registration group identifier and many for the registrant and publication elements. Here are some sample ISBN-10 codes, illustrating block length variations.
English-language pattern: English-language registration group elements are 0 and 1 (2 of more than 220 registration group elements). These two registration group elements are divided into registrant elements in a systematic pattern, which allows their length to be determined, as follows:
Check digits: A check digit is a form of redundancy check used for error detection, the decimal equivalent of a binary check bit. It consists of a single digit computed from the other digits in the number. The method for the 10-digit ISBN is an extension of that for SBNs, so the two systems are compatible; an SBN prefixed with a zero (the 10-digit ISBN) will give the same check digit as the SBN without the zero. The check digit is base eleven, and can be an integer between 0 and 9, or an 'X'. The system for 13-digit ISBNs is not compatible with SBNs and will, in general, give a different check digit from the corresponding 10-digit ISBN, so does not provide the same protection against transposition. This is because the 13-digit code was required to be compatible with the EAN format, and hence could not contain the letter 'X'.
ISBN-10 check digits: According to the 2001 edition of the International ISBN Agency's official user manual, the ISBN-10 check digit (which is the last digit of the 10-digit ISBN) must range from 0 to 10 (the symbol 'X' is used for 10), and must be such that the sum of the ten digits, each multiplied by its (integer) weight, descending from 10 to 1, is a multiple of 11. That is, if xi is the ith digit, then x10 must be chosen such that the sum 10x1 + 9x2 + 8x3 + 7x4 + 6x5 + 5x6 + 4x7 + 3x8 + 2x9 + x10 is a multiple of 11.
For example, for an ISBN-10 of 0-306-40615-2:
s = 0×10 + 3×9 + 0×8 + 6×7 + 4×6 + 0×5 + 6×4 + 1×3 + 5×2 + 2×1
  = 0 + 27 + 0 + 42 + 24 + 0 + 24 + 3 + 10 + 2
  = 132 = 12 × 11
Formally, using modular arithmetic, this is rendered
10x1 + 9x2 + 8x3 + 7x4 + 6x5 + 5x6 + 4x7 + 3x8 + 2x9 + x10 ≡ 0 (mod 11).
It is also true for ISBN-10s that the sum of all ten digits, each multiplied by its weight in ascending order from 1 to 10, is a multiple of 11. For this example:
s = 0×1 + 3×2 + 0×3 + 6×4 + 4×5 + 0×6 + 6×7 + 1×8 + 5×9 + 2×10
  = 0 + 6 + 0 + 24 + 20 + 0 + 42 + 8 + 45 + 20
  = 165 = 15 × 11
Formally, this is rendered
x1 + 2x2 + 3x3 + 4x4 + 5x5 + 6x6 + 7x7 + 8x8 + 9x9 + 10x10 ≡ 0 (mod 11).
The two most common errors in handling an ISBN (e.g. when typing it or writing it down) are a single altered digit or the transposition of adjacent digits. It can be proven mathematically that all pairs of valid ISBN-10s differ in at least two digits. It can also be proven that there are no pairs of valid ISBN-10s with eight identical digits and two transposed digits (these proofs are true because the ISBN is less than eleven digits long and because 11 is a prime number). The ISBN check digit method therefore ensures that it will always be possible to detect these two most common types of error, i.e., if either of these types of error has occurred, the result will never be a valid ISBN—the sum of the digits multiplied by their weights will never be a multiple of 11. However, if the error were to occur in the publishing house and remain undetected, the book would be issued with an invalid ISBN.
In contrast, it is possible for other types of error, such as two altered non-transposed digits, or three altered digits, to result in a valid ISBN (although it is still unlikely).
ISBN-10 check digit calculation: Each of the first nine digits of the 10-digit ISBN—excluding the check digit itself—is multiplied by its (integer) weight, descending from 10 to 2, and the sum of these nine products is found. The value of the check digit is simply the one number between 0 and 10 which, when added to this sum, makes the total a multiple of 11.
For example, the check digit for an ISBN-10 of 0-306-40615-? is calculated as follows:
s = 0×10 + 3×9 + 0×8 + 6×7 + 4×6 + 0×5 + 6×4 + 1×3 + 5×2
  = 0 + 27 + 0 + 42 + 24 + 0 + 24 + 3 + 10
  = 130
Adding 2 to 130 gives a multiple of 11 (because 132 = 12×11)—this is the only number between 0 and 10 which does so. Therefore, the check digit has to be 2, and the complete sequence is ISBN 0-306-40615-2. If the value of x10 required to satisfy this condition is 10, then an 'X' should be used.
Alternatively, modular arithmetic is convenient for calculating the check digit using modulus 11. The remainder of this sum when it is divided by 11 (i.e. its value modulo 11), is computed. This remainder plus the check digit must equal either 0 or 11. Therefore, the check digit is (11 minus the remainder of the sum of the products modulo 11) modulo 11. Taking the remainder modulo 11 a second time accounts for the possibility that the first remainder is 0. Without the second modulo operation, the calculation could result in a check digit value of 11 − 0 = 11, which is invalid. (Strictly speaking, the first "modulo 11" is not needed, but it may be considered to simplify the calculation.)
For example, the check digit for the ISBN of 0-306-40615-? is calculated as follows:
s = 130 (the sum of the weighted products of the first nine digits, as above)
130 / 11 = 11 remainder 9
11 – 9 = 2
Thus the check digit is 2.
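To make the procedure concrete, the following is a minimal Python sketch of the ISBN-10 check digit calculation described above; the function name and input format are illustrative assumptions.

def isbn10_check_digit(first_nine_digits):
    # first_nine_digits: a list of the first nine digits, e.g. [0, 3, 0, 6, 4, 0, 6, 1, 5]
    # Weighted sum, with weights descending from 10 to 2.
    total = sum(w * d for w, d in zip(range(10, 1, -1), first_nine_digits))
    check = (11 - total % 11) % 11   # the value that makes the total a multiple of 11
    return "X" if check == 10 else str(check)

# isbn10_check_digit([0, 3, 0, 6, 4, 0, 6, 1, 5]) returns "2", matching ISBN 0-306-40615-2.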
It is possible to avoid the multiplications in a software implementation by using two accumulators. Repeatedly adding t into s computes the necessary multiples:
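A minimal Python sketch of this two-accumulator approach follows; the function name is an illustrative assumption, the digit value 10 stands in for a trailing 'X', and a return value of 0 indicates a valid ISBN-10.

def isbn10_check_value(digits):
    # digits: all ten digits of the ISBN-10, with 10 standing in for a final 'X'.
    s = 0  # running weighted sum
    t = 0  # running sum of the digits seen so far
    for d in digits:
        t += d     # after the i-th digit, t = x1 + ... + xi
        s += t     # adding t repeatedly gives s = 10*x1 + 9*x2 + ... + 1*x10
    return s % 11  # a single modular reduction at the end; 0 means the ISBN-10 is valid

# isbn10_check_value([0, 3, 0, 6, 4, 0, 6, 1, 5, 2]) returns 0.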
The modular reduction can be done once at the end, as shown above (in which case s could hold a value as large as 496, for the invalid ISBN 99999-999-9-X), or s and t could be reduced by a conditional subtract after each addition.
ISBN-13 check digit calculation: Appendix 1 of the International ISBN Agency's official user manual describes how the 13-digit ISBN check digit is calculated. The ISBN-13 check digit, which is the last digit of the ISBN, must range from 0 to 9 and must be such that the sum of all the thirteen digits, each multiplied by its (integer) weight, alternating between 1 and 3, is a multiple of 10. As ISBN-13 is a subset of EAN-13, the algorithm for calculating the check digit is exactly the same for both.
Formally, using modular arithmetic, this is rendered:
x1 + 3x2 + x3 + 3x4 + x5 + 3x6 + x7 + 3x8 + x9 + 3x10 + x11 + 3x12 + x13 ≡ 0 (mod 10)
The calculation of an ISBN-13 check digit begins with the first twelve digits of the 13-digit ISBN (thus excluding the check digit itself). Each digit, from left to right, is alternately multiplied by 1 or 3, then those products are summed modulo 10 to give a value ranging from 0 to 9. Subtracted from 10, that leaves a result from 1 to 10. A zero replaces a ten, so, in all cases, a single check digit results.
For example, the ISBN-13 check digit of 978-0-306-40615-? is calculated as follows:
s = 9×1 + 7×3 + 8×1 + 0×3 + 3×1 + 0×3 + 6×1 + 4×3 + 0×1 + 6×3 + 1×1 + 5×3
= 9 + 21 + 8 + 0 + 3 + 0 + 6 + 12 + 0 + 18 + 1 + 15
= 93
93 / 10 = 9 remainder 3
10 – 3 = 7
Thus, the check digit is 7, and the complete sequence is ISBN 978-0-306-40615-7.
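The same ISBN-13 calculation expressed as a minimal Python sketch (the function name and input format are illustrative assumptions):

def isbn13_check_digit(first_twelve_digits):
    # Weights alternate 1, 3, 1, 3, ... from left to right across the first twelve digits.
    total = sum((1 if i % 2 == 0 else 3) * d for i, d in enumerate(first_twelve_digits))
    return (10 - total % 10) % 10

# isbn13_check_digit([9, 7, 8, 0, 3, 0, 6, 4, 0, 6, 1, 5]) returns 7,
# matching ISBN 978-0-306-40615-7.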
In general, the ISBN-13 check digit is calculated as follows.
Let
r = (x1 + 3x2 + x3 + 3x4 + x5 + 3x6 + x7 + 3x8 + x9 + 3x10 + x11 + 3x12) mod 10
Then
x13 = (10 − r) mod 10
This check system—similar to the UPC check digit formula—does not catch all errors of adjacent digit transposition. Specifically, if the difference between two adjacent digits is 5, the check digit will not catch their transposition. For instance, the above example allows this situation with the 6 followed by a 1. The correct order contributes 3 × 6 + 1 × 1 = 19 to the sum; while, if the digits are transposed (1 followed by a 6), the contribution of those two digits will be 3 × 1 + 1 × 6 = 9. However, 19 and 9 are congruent modulo 10, and so produce the same, final result: both ISBNs will have a check digit of 7. The ISBN-10 formula uses the prime modulus 11 which avoids this blind spot, but requires more than the digits 0–9 to express the check digit.
Additionally, if the sum of the 2nd, 4th, 6th, 8th, 10th, and 12th digits is tripled then added to the remaining digits (1st, 3rd, 5th, 7th, 9th, 11th, and 13th), the total will always be divisible by 10 (i.e., end in 0).
ISBN-10 to ISBN-13 conversion: A 10-digit ISBN is converted to a 13-digit ISBN by prepending "978" to the ISBN-10 and recalculating the final checksum digit using the ISBN-13 algorithm. The reverse process can also be performed, but not for numbers commencing with a prefix other than 978, which have no 10-digit equivalent.
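A minimal Python sketch of the conversion just described, reusing the isbn13_check_digit sketch above; the helper name and the assumption that the input is a ten-character string without hyphens are illustrative.

def isbn10_to_isbn13(isbn10):
    # isbn10: ten characters, hyphens removed; the last character may be 'X'.
    first_twelve = "978" + isbn10[:9]        # drop the old check digit, prepend 978
    digits = [int(c) for c in first_twelve]
    return first_twelve + str(isbn13_check_digit(digits))  # recalculate with the ISBN-13 rule

# isbn10_to_isbn13("0306406152") returns "9780306406157".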
Errors in usage: Publishers and libraries have varied policies about the use of the ISBN check digit. Publishers sometimes fail to check the correspondence of a book title and its ISBN before publishing it; that failure causes book identification problems for libraries, booksellers, and readers. For example, ISBN 0-590-76484-5 is shared by two books—Ninja gaiden: a novel based on the best-selling game by Tecmo (1990) and Wacky laws (1997), both published by Scholastic.
Most libraries and booksellers display the book record for an invalid ISBN issued by the publisher. The Library of Congress catalogue contains books published with invalid ISBNs, which it usually tags with the phrase "Cancelled ISBN". The International Union Library Catalog (a.k.a., WorldCat OCLC—Online Computer Library Center system) often indexes by invalid ISBNs, if the book is indexed in that way by a member library.
eISBN: Only the term "ISBN" should be used; the terms "eISBN" and "e-ISBN" have historically been sources of confusion and should be avoided. If a book exists in one or more digital (e-book) formats, each of those formats must have its own ISBN. In other words, each of the three separate EPUB, Amazon Kindle, and PDF formats of a particular book will have its own specific ISBN. They should not share the ISBN of the paper version, and there is no generic "eISBN" which encompasses all the e-book formats for a title.
EAN format used in barcodes, and upgrading: The barcodes on a book's back cover (or inside a mass-market paperback book's front cover) are EAN-13; they may have a separate barcode encoding five digits called an EAN-5 for the currency and the recommended retail price. For 10-digit ISBNs, the number "978", the Bookland "country code", is prefixed to the ISBN in the barcode data, and the check digit is recalculated according to the EAN-13 formula (modulo 10, 1× and 3× weighting on alternating digits).
Partly because of an expected shortage in certain ISBN categories, the International Organization for Standardization (ISO) decided to migrate to a 13-digit ISBN (ISBN-13). The process began on 1 January 2005 and was planned to conclude on 1 January 2007. As of 2011, all the 13-digit ISBNs began with 978. As the 978 supply became exhausted, the 979 prefix was introduced. Part of the 979 prefix is reserved for use with the Musicland code for musical scores with an ISMN. The 10-digit ISMN codes differed visually as they began with an "M" letter; the bar code represents the "M" as a zero, and for checksum purposes it counted as a 3. All ISMNs are now thirteen digits commencing 979-0; 979-1 to 979-9 will be used by ISBN.
Publisher identification code numbers are unlikely to be the same in the 978 and 979 ISBNs, likewise, there is no guarantee that language area code numbers will be the same. Moreover, the 10-digit ISBN check digit generally is not the same as the 13-digit ISBN check digit. Because the GTIN-13 is part of the Global Trade Item Number (GTIN) system (that includes the GTIN-14, the GTIN-12, and the GTIN-8), the 13-digit ISBN falls within the 14-digit data field range.
Barcode format compatibility is maintained, because (aside from the group breaks) the ISBN-13 barcode format is identical to the EAN barcode format of existing 10-digit ISBNs. So, migration to an EAN-based system allows booksellers the use of a single numbering system for both books and non-book products that is compatible with existing ISBN based data, with only minimal changes to information technology systems. Hence, many booksellers (e.g., Barnes & Noble) migrated to EAN barcodes as early as March 2005. Although many American and Canadian booksellers were able to read EAN-13 barcodes before 2005, most general retailers could not read them. The upgrading of the UPC barcode system to full EAN-13, in 2005, eased migration to the ISBN in North America.
See also: ASIN (Amazon Standard Identification Number)
BICI (Book Item and Component Identifier)
Book sources search – a Wikipedia resource that allows search by ISBNs
CODEN (serial publication identifier currently used by libraries; replaced by the ISSN for new works)
DOI (Digital Object Identifier)
ESTC (English Short Title Catalogue)
ISAN (International Standard Audiovisual Number)
ISRC (International Standard Recording Code)
ISTC (International Standard Text Code)
ISWC (International Standard Musical Work Code)
ISSN (International Standard Serial Number)
ISWN (International Standard Wine Number)
LCCN (Library of Congress Control Number)
License number (East German books) (Book identification system used between 1951 and 1990 in the former GDR)
List of group-0 ISBN publisher codes
List of group-1 ISBN publisher codes
List of ISBN registration groups
SICI (Serial Item and Contribution Identifier)
VD 16 (Verzeichnis der im deutschen Sprachbereich erschienenen Drucke des 16. Jahrhunderts, "Bibliography of Books Printed in the German Speaking Countries of the Sixteenth Century")
VD 17 (Verzeichnis der im deutschen Sprachraum erschienenen Drucke des 17. Jahrhunderts, "Bibliography of Books Printed in the German Speaking Countries of the Seventeenth Century")
External links:
ISO 2108:2017 – International Standard Book Number (ISBN)
International ISBN Agency – coordinates and supervises the worldwide use of the ISBN system
Numerical List of Group Identifiers – List of language/region prefixes
Free conversion tool: ISBN-10 to ISBN-13 & ISBN-13 to ISBN-10 from the ISBN agency. Also shows correct hyphenation & verifies if ISBNs are valid or not.
"Guidelines for the Implementation of 13-Digit ISBNs" (PDF). Archived from the original (PDF) on 12 September 2004.
RFC 3187 – Using International Standard Book Numbers as Uniform Resource Names (URN)
Worldwide Auto-Converter at Library of Congress |
mil_tactics_continued_pretraining.csv | Imagery intelligence | History:
Origins: Although aerial photography was first used extensively in the First World War, it was only in the Second World War that specialized imagery intelligence operations were initiated. High quality images were made possible with a series of innovations in the decade leading up to the war. In 1928, the RAF developed an electric heating system for the aerial camera. This allowed reconnaissance aircraft to take pictures from very high altitudes without the camera parts freezing.
In 1939, Sidney Cotton and Flying Officer Maurice Longbottom of the RAF suggested that airborne reconnaissance may be a task better suited to fast, small aircraft which would use their speed and high service ceiling to avoid detection and interception. They proposed the use of Spitfires with their armament and radios removed and replaced with extra fuel and cameras. This led to the development of the Spitfire PR variants. These planes had a maximum speed of 396 mph at 30,000 feet with their armaments removed, and were used for photo-reconnaissance missions. The aircraft were fitted with five cameras which were heated to ensure good results.
The systematic collection and interpretation of the huge amounts of aerial reconnaissance intelligence data soon became imperative. Beginning in 1941, RAF Medmenham was the main interpretation centre for photographic reconnaissance operations in the European and Mediterranean theatres. The Central Interpretation Unit (CIU) was later amalgamated with the Bomber Command Damage Assessment Section and the Night Photographic Interpretation Section of No 3 Photographic Reconnaissance Unit, RAF Oakington, in 1942.
During 1942 and 1943, the CIU gradually expanded and was involved in the planning stages of practically every operation of the war, and in every aspect of intelligence. In 1945, daily intake of material averaged 25,000 negatives and 60,000 prints. Thirty-six million prints were made during the war. By VE-day, the print library, which documented and stored worldwide cover, held 5,000,000 prints from which 40,000 reports had been produced.
American personnel had for some time formed an increasing part of the CIU and on 1 May 1944 this was finally recognized by changing the title of the unit to the Allied Central Interpretation Unit (ACIU). There were then over 1,700 personnel on the unit's strength. A large number of photographic interpreters were recruited from the Hollywood Film Studios including Xavier Atencio. Two renowned archaeologists also worked there as interpreters: Dorothy Garrod, the first woman to hold an Oxbridge Chair, and Glyn Daniel, who went on to gain popular acclaim as the host of the television game show Animal, Vegetable or Mineral?.
Sidney Cotton's aerial photographs were far ahead of their time. Together with other members of his reconnaissance squadron, he pioneered the technique of high-altitude, high-speed photography that was instrumental in revealing the locations of many crucial military and intelligence targets. Cotton also worked on ideas such as a prototype specialist reconnaissance aircraft and further refinements of photographic equipment. At its peak, British reconnaissance flights yielded 50,000 images per day to interpret.
Of particular significance in the success of the work at Medmenham was the use of stereoscopic images, produced with an overlap of exactly 60% between successive plates. Despite initial scepticism about the possibility of German rocket technology, major operations, including the 1943 offensives against the V-2 rocket development plant at Peenemünde, were made possible by painstaking work carried out at Medmenham. Later offensives were also made against potential launch sites at Wizernes and 96 other launch sites in northern France.
It is claimed that Medmenham's greatest operational success was "Operation Crossbow" which, from 23 December 1943, destroyed the V-1 infrastructure in northern France. According to R.V. Jones, photographs were used to establish the size and the characteristic launching mechanisms for both the V-1 flying bomb and the V-2 rocket.
Post war spyplanes: Immediately after World War II, long range aerial reconnaissance was taken up by adapted jet bombers – such as the English Electric Canberra, and its American development, the Martin B-57 – capable of flying higher or faster than the enemy.
Highly specialized and secretive strategic reconnaissance aircraft, or spy planes, such as the Lockheed U-2 and its successor, the SR-71 Blackbird were developed by the United States. Flying these aircraft became an exceptionally demanding task, as much because of the aircraft's extreme speed and altitude as the risk of being captured as spies. As a result, the crews of these aircraft were invariably specially selected and trained.
There are claims that the US constructed a hypersonic reconnaissance aircraft, dubbed the Aurora, in the late 1980s to replace the Blackbird. Since the early 1960s, in the United States aerial and satellite reconnaissance has been coordinated by the National Reconnaissance Office.
Early use of satellites: Early photographic reconnaissance satellites used photographic film, which was exposed on-orbit and returned to earth for developing. These satellites remained in orbit for days, weeks, or months before ejecting their film-return vehicles, called "buckets." Between 1959 and 1984 the U.S. launched around 200 such satellites under the codenames CORONA and GAMBIT, with ultimate photographic resolution (ground-resolution distance) better than 4 inches (0.10 m). The first successful mission concluded on 1960-08-19 with the mid-air recovery by a C-119 of film from the Corona mission code-named Discoverer 14. This was the first successful recovery of film from an orbiting satellite and the first aerial recovery of an object returning from Earth orbit. Because of a tradeoff between area covered and ground resolution, not all reconnaissance satellites have been designed for high resolution; the KH-5-ARGON program had a ground resolution of 140 meters and was intended for mapmaking.
Between 1961 and 1994 the USSR launched perhaps 500 Zenit film-return satellites, which returned both the film and the camera to earth in a pressurized capsule.
The U.S. KH-11 series of satellites, first launched in 1976, was made by Lockheed, the same contractor who built the Hubble Space Telescope. HST has a 2.4 metre telescope mirror and is believed to have had a similar appearance to the KH-11 satellites. These satellites used charge-coupled devices, predecessors to modern digital cameras, rather than film. Russian reconnaissance satellites with comparable capabilities are named Resurs DK and Persona.
Aircraft: Low- and high-flying planes have been used throughout the last century to gather intelligence about the enemy. U.S. high-flying reconnaissance planes include the Lockheed U-2 and the much faster SR-71 Blackbird (retired in 1998). One advantage planes have over satellites is that planes can usually produce more detailed photographs and can be placed over the target more quickly, more often, and more cheaply, but planes also have the disadvantage of possibly being intercepted by aircraft or missiles, as in the 1960 U-2 incident.
Unmanned aerial vehicles have been developed for imagery and signals intelligence. These drones act as a force multiplier, giving the battlefield commander an "eye in the sky" without risking a pilot.
Satellite: Though the resolution of satellite photographs, which must be taken from distances of hundreds of kilometers, is usually poorer than photographs taken by air, satellites offer the possibility of coverage for much of the earth, including hostile territory, without exposing human pilots to the risk of being shot down.
There have been hundreds of reconnaissance satellites launched by dozens of nations since the first years of space exploration. Satellites for imaging intelligence were usually placed in high-inclination low Earth orbits, sometimes in Sun-synchronous orbits. Since the film-return missions were usually short, they could indulge in orbits with low perigees, in the range of 100–200 km, but the more recent CCD-based satellites have been launched into higher orbits, 250–300 km perigee, allowing each to remain in orbit for several years. While the exact resolution and other details of modern spy satellites are classified, some idea of the trade-offs available can be made using simple physics. The formula for the highest possible resolution of an optical system with a circular aperture is given by the Rayleigh criterion:
{\displaystyle \sin \theta =1.22{\frac {\lambda }{D}}.}
Using
{\displaystyle \sin \theta ={\frac {\text{size}}{\text{distance}}},}
we can get
{\displaystyle {\text{size}}=1.22{\frac {\lambda }{D}}{\text{distance}},}
where θ is the angular resolution, λ is the wavelength of light, and D is the diameter of the lens or mirror. Were the Hubble Space Telescope, with its 2.4 m mirror, designed for photographing Earth, it would be diffraction-limited to resolutions greater than 16 cm (6 inches) for green light (λ ≈ 550 nm) at its orbital altitude of 590 km. This means that it would be impossible to take photographs showing objects smaller than 16 cm with such a telescope at such an altitude. Modern U.S. IMINT satellites are believed to have around 10 cm resolution; contrary to references in popular culture, this is sufficient to detect any type of vehicle, but not to read the headlines of a newspaper.
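The 16 cm figure can be reproduced directly from the Rayleigh criterion. The short Python sketch below uses the values from the example above (a 2.4 m aperture, 550 nm light, a 590 km altitude); the function name is an illustrative assumption.

def ground_resolution(wavelength_m, aperture_m, distance_m):
    # Smallest resolvable feature size: 1.22 * (wavelength / aperture) * distance.
    return 1.22 * (wavelength_m / aperture_m) * distance_m

# ground_resolution(550e-9, 2.4, 590e3) is about 0.165 m, i.e. roughly 16 cm.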
The primary purpose of most spy satellites is to monitor visible ground activity. While resolution and clarity of images has improved greatly over the years, this role has remained essentially the same. Some other uses of satellite imaging have been to produce detailed 3D maps for use in operations and missile guidance systems, and to monitor normally invisible information such as the growth levels of a country's crops or the heat given off by certain facilities. Some of the multi-spectral sensors, such as thermal measurement, are more electro-optical MASINT than true IMINT platforms.
To counter the threat posed by these "eyes in the sky," the United States, USSR/Russia, China and India have developed systems for destroying enemy spy satellites (either with the use of another 'killer satellite', or with some sort of Earth- or air-launched missile).
Since 1985, commercial vendors of satellite imagery have entered the market, beginning with the French SPOT satellites, which had resolutions between 5 and 20 metres. Recent high-resolution (4–0.5 metre) private imaging satellites include TerraSAR-X, IKONOS, Orbview, QuickBird and Worldview-1, allowing any country (or any business for that matter) to buy access to satellite images.
Analytical Methodology: The value of an IMINT report is determined by a balance between the timeliness and the robustness of the intelligence product. The fidelity of intelligence that may be gleaned from imagery analysis is traditionally perceived by intelligence professionals as a function of the amount of time an imagery analyst (IA) has to exploit a given image or set of imagery. The United States Army field manual therefore breaks IMINT analysis into three distinct phases, based upon the amount of time expended in exploiting any given image.
First phase: First phase imagery analysis is deemed "time-dominant." This means that given imagery must be rapidly exploited in order to satisfy an immediate requirement for imagery-sourced intelligence from which a leader may make an educated political and/or military decision. Due to the need to produce near-real time intelligence assessments based upon collected imagery, first phase imagery analysis is rarely compared to collateral intelligence.
Second phase: Second phase imagery analysis is centered on the further exploitation of recently collected imagery to support short- to mid-term decision-making. Like first phase imagery analysis, second phase imagery analysis is generally catalyzed by a local commander's Priority Intelligence Requirements, at least in the context of a military operational setting. Whereas first phase imagery analysis may depend on the exploitation of a relatively small repository of imagery, or even a single image, second phase imagery analysis generally mandates a review of a chronological set of imagery over time, so as to establish a temporal understanding of objects and/or activities of interest.
Third phase: Third phase imagery analysis is generally conducted in order to satisfy strategic intelligence questions or to otherwise explore existing data in the search of "discovery intelligence." Third phase imagery analysis hinges on the use of a large repository of historical imagery as well as access to a variety of sources of information. Third phase imagery analysis incorporates supporting information and intelligence from other intelligence gathering disciplines and is, therefore, generally conducted in support of a multi-source intelligence team. The exploitation of imagery at this level of analysis is typically conducted with the intention of producing Geospatial Intelligence (GEOINT).
See also: Arthur C. Lundahl
Canadian Forces Joint Imagery Centre (Canadian GEOINT organization)
Defence Imagery and Geospatial Organisation (DIGO) (Australian GEOINT organization)
Defence Intelligence Fusion Centre (British GEOINT organization)
Dino A. Brugioni
First images of Earth from space
GIS in GEOINT
Geospatial intelligence (GEOINT)
National Collection of Aerial Photography (NCAP)
National Geospatial-Intelligence Agency (American GEOINT organization)
RAF Intelligence: Royal Air Force Intelligence Branch
Remote Sensing
External links: Introduction to Imagery Intelligence via globalsecurity
Australian Defence Satellite Communications Station, Geraldton
Joint Australian-US intelligence facility - Pine Gap |
mil_tactics_continued_pretraining.csv | Immelmann turn | Historical combat maneuver: In World War I aerial combat, an Immelmann turn was a maneuver used after an attack on another aircraft to reposition the attacking aircraft for another attack.
After making a high-speed diving attack on an enemy, the attacker would then climb back up past the enemy aircraft, and just short of the stall, apply full rudder to yaw his aircraft around. This put his aircraft facing down at the enemy aircraft, making another high-speed diving pass possible. This is a difficult maneuver to perform properly, as it involves precise control of the aircraft at low speed. With practice and proper use of all of the fighter's controls, the maneuver could be used to reposition the attacking aircraft to dive back down in any direction desired.
In modern aerobatics, this maneuver, if executed pre-stall with a non-zero turning radius at the top of the climb, is known as a wingover. If the rudder turn is executed right at the initiation of the stall, the resulting yaw occurs around a point within the aircraft's wingspan and the maneuver is known as a stall turn or hammerhead.
Aerobatic maneuver: The aerobatic Immelmann turn derives its name from the dogfighting tactic, but is a different maneuver than the original, now known as a "wingover" or "hammerhead".
In modern aerobatics, an Immelmann turn (also known as a roll-off-the-top, or simply an Immelmann) is an aerobatic maneuver. Essentially, it comprises an ascending half-loop followed by a half-roll, resulting in level flight in the opposite direction at a higher altitude. It is the opposite of a Split S, which involves a half-roll followed by a half-loop, resulting in level flight in the opposite direction at a lower altitude.
To successfully execute a roll-off-the-top turn, the pilot accelerates to sufficient airspeed to perform a loop in the aircraft. The pilot then pulls the aircraft into a climb, and continues to pull back on the controls as the aircraft climbs. Rudder and ailerons must be used to keep the half-loop straight when viewed from the ground. As the aircraft passes over the point at which the climb was commenced, it should be inverted and a half loop will have been executed. Sufficient airspeed must be maintained to recover without losing altitude, and at the top of the loop the pilot then executes a half-roll to regain normal upright aircraft orientation. As a result, the aircraft is now at a higher altitude and has changed course 180 degrees.
Not all aircraft are capable of (or certified for) this maneuver, due to insufficient engine power or an engine design that precludes inverted flying. (This usually applies to piston engines with an open oil pan. However, when properly flown, the aircraft maintains positive G throughout the maneuver, eliminating the requirement for an inverted oil system.) In fact, only a few early aircraft had sufficiently precise roll control to perform this maneuver properly.
See also: Chandelle
Cuban Eight
The Scissors
Split S
Thach Weave
External links: The "Immelman" Turn |
mil_tactics_continued_pretraining.csv | Industrial warfare | Total war: One of the main features of industrial warfare is the concept of "total war". The term was coined during World War I by Erich Ludendorff (and again in his 1935 book Total War), which called for the complete mobilization and subordination of all resources, including policy and social systems, to the German war effort. It has also come to mean waging warfare with absolute ruthlessness, and its most identifiable legacy today has been the reintroduction of civilians and civilian infrastructure as targets in destroying the enemy's ability to engage in war.
There are several reasons for the rise of total warfare in the 19th century. The main one is industrialization. As countries' capital and natural resources grew, it became clear that some forms of warfare demanded more resources than others. Consequently, the greater cost of warfare became evident. An industrialized nation could distinguish and then choose the intensity of warfare that it wished to engage in.
Additionally, warfare was becoming more mechanized and required greater infrastructure. Combatants could no longer live off the land, but required an extensive support network of people behind the lines to keep them fed and armed. This required the mobilization of the home front. Modern concepts like propaganda were first used to boost production and maintain morale, while rationing took place to provide more war material.
The earliest modern example of total war was the American Civil War. Union generals Ulysses S. Grant and William Tecumseh Sherman were convinced that, if the North was to be victorious, the Confederacy's strategic, economic, and psychological ability to wage war had to be definitively crushed. They believed that to break the backbone of the South, the North had to employ scorched earth tactics, or as Sherman called it, "Hard War". Sherman's advance through Georgia and the Carolinas was characterized by the widespread destruction of civilian supplies and infrastructure. In contrast to later conflicts, the damage done by Sherman was almost entirely limited to property destruction. In Georgia alone, Sherman claimed he and his men had caused $100,000,000 in damages.
Conscription: Conscription is the compulsory enrollment of civilians into military service. Conscription allowed the French Republic to form La Grande Armée, what Napoleon Bonaparte called "the nation in arms", which successfully battled smaller, professional European armies.
Conscription, particularly when the conscripts are being sent to foreign wars that do not directly affect the security of the nation, has historically been highly politically contentious in democracies. For instance, during World War I, bitter political disputes broke out in Canada (see Conscription Crisis of 1917), Newfoundland, Australia and New Zealand (see Compulsory Military Training) over conscription. Canada also had a political dispute over conscription during World War II (see Conscription Crisis of 1944). Both South Africa and Australia put limits on where conscripts could fight in WWII. Similarly, mass protests against conscription to fight the Vietnam War occurred in several countries in the late 1960s.
In developed nations, the increasing emphasis on technological firepower and better-trained fighting forces, the sheer unlikelihood of a conventional military assault on most developed nations, as well as memories of widespread controversies over the Vietnam War, make mass conscription less likely, but still possible, in the future.
Russia and many smaller nations, such as Switzerland, retain mainly conscript armies.
Transportation:
Land: Prior to the invention of motorized transport, combatants were moved by wagon, on horseback, and by marching. With the advent of locomotives, large groups of combatants, supplies, and equipment could be transported faster and in larger numbers. To counter this, an opposing force would destroy rail lines to hinder its enemy's movements. During the American Civil War, General Sherman's men would destroy tracks, heat the rails, and wrap them around trees.
The mass transportation of combatants was further revolutionized with the advent of the internal combustion engine and the automobile. These developments, combined with the widespread use of the machine gun, finally supplanted the horse in its wartime role after millennia of use. During both WWI and WWII, trucks were used to carry combatants and materiel, while cars and jeeps were used to scout enemy positions.
The mechanization of infantry occurred during WWII. The tank, a product of World War I invented independently by the British and French to break through trenches while withstanding machine-gun fire, came into its own despite having been discounted by many. Tanks evolved from thin-skinned, lumbering vehicles into fast, powerful war machines of various types that dominated the battlefield and allowed the Germans to conquer most of Europe. As a result of the tank's evolution, a number of armored transport vehicles appeared, such as armoured personnel carriers and amphibious vehicles.
After the war ended, armored transports continued to evolve. The armored car and train declined in use, largely becoming relegated to military and civilian use as transportation for VIPs. Infantry fighting vehicles rose to prominence with the creation of the Soviet BMP-1. IFVs are a more combat capable version of the APC, with heavier armaments (such as autocannons), while still retaining the ability to transport combatants into and out of battles.
Sea: Sealift is a military logistics term referring to the use of cargo ships for the deployment of military assets, such as weaponry, military personnel, and materiel supplies. It complements other means of transport, such as strategic airlifters, in order to enhance a state's ability to project power. A state's sealift capabilities may include civilian-operated ships that normally operate by contract, but which can be chartered or commandeered during times of military necessity to supplement government-owned naval fleets.
During WWI, the United States bought, borrowed or commandeered vessels of various types, ranging from pleasure craft to ocean liners to transport the American Expeditionary Force to Europe. Many of these ships were scrapped, sold or returned to their owners after the war.
Air: There are two different kinds of airlifts in warfare: strategic airlift and tactical airlift. A strategic airlift is the transporting of weapons, supplies, and personnel over long distances (from a base in one country to a base in another country, for example) using large cargo aircraft. This contrasts with a tactical airlift, which involves transporting the same items within a theater of operations, usually with cargo planes of shorter range and slower speed but higher maneuverability.
Communications: Cryptography
Homing pigeon/War pigeon
Joint Army/Navy Phonetic Alphabet
Message precedence
Semaphore (communication)
Signal Corps
Smoke signal
Telegraphy
Equipment: Aldis lamp
International maritime signal flags
Land warfare: Land warfare, as the name implies, takes place on land. The most common type of warfare, it can encompass several modes and locales, including urban, arctic, and mountain warfare.
The early part of the 19th century from 1815 to 1848 saw a long period of peace in Europe, accompanied by extraordinary industrial expansion. The industrial age brought about various technological advancements, each with their own implication. Land warfare moved from visual-range and semi person-to-person combat of the previous era, to indiscriminate and impersonal, "beyond visual range" warfare. The Crimean War (1853–1856) saw the introduction of trench warfare, long-range artillery, railroads, the telegraph, and the rifle. The mechanized mass-destruction of enemy combatants grew ever more deadly. In WWI (1914–1918) machine-guns, barbed wire, chemical weapons, and land-mines entered the battlefield. The deadly stalemated trench-warfare stage was finally passed with the advent of the modern armored tank late in WWI.
One major trend involved the transition away from massed infantry fire and human waves to more refined tactics. This became possible with the superseding of earlier weapons like the highly inaccurate musket.
Technological advances: Rifling refers to the act of adding spiral grooves to the inside of the barrel of a firearm. The grooves would cause a projectile to spin as it traveled down the barrel, improving range and accuracy. Once rifling became easier and practical, a new type of firearm was introduced, the rifle. It gave combatants the ability to specifically target an enemy combatant, rather than have large numbers of combatants fire in a general direction. It effectively broke up groups of combatants into smaller more maneuverable units.
Artillery are large guns designed to fire large projectiles a great distance. Early artillery pieces were large and cumbersome with slow rates of fire. This reduced their use to sieges, by both defenders and attackers. With the advent of the industrial age and various technological advancements, lighter, yet powerful and accurate artillery pieces were produced. This gave rise to field artillery which were used on a tactical level to support troops.
Machine guns are fully automatic guns. In this era of warfare they existed only as mounted support weapons, as hand-held automatic firearms had not yet been developed. Early machine guns, such as the one invented by Richard Gatling, were hand-cranked, but truly automatic machine guns, such as Maxim's, appeared at the end of the era. Machine guns were valued for their ability to smash infantry formations, especially attacking enemy formations when they were dense. This, along with effective field artillery, changed tactics drastically.
Static defense: Static defenses evolved from the use of permanent fortifications that were direct descendants of medieval castles. As artillery improved in destructive power and penetrative ability, more modern fortifications were developed, using first thicker layers of stone, then concrete and steel. After naval artillery developed the turret – a moving cannon platform – land fortifications started to use this method as well. Between the World Wars, France built an "impregnable" underground steel and concrete fortification that ran the length of the German-French border. This Maginot Line failed to stop German tanks in 1940: they bypassed the fortifications by invading through neighboring Belgium.
Temporary fortifications: As artillery and rifles allowed the killing of enemy personnel at a longer effective range, soldiers started to dig into temporary fortifications. These included massive trenches as used in WWI, and individual soldier-sized "fox holes" which became more common in WWII.
Maneuver warfare: Maneuver had existed throughout military history – from soldiers marching on the field to using horses in cavalry formations. It was not until the advent of mechanized transport over unprepared terrain, such as fields and deserts, using tanks and armored vehicles, that "maneuver warfare" became feasible. First used by the German army in Poland and France in WWII, Blitzkrieg or "lightning war" saw whole armies moved rapidly on tracked and armored fighting vehicles. During the war airborne movement was used, with soldiers dropped to the battlefield by parachute by both the Germans and the Allies. After WWII, developments in helicopters brought a more practical way to transport troops by air.
Armoured warfare
Blitzkrieg
Deep operations
Naval warfare:
Ironclads and Dreadnoughts: The period after the Napoleonic Wars was one of intensive experimentation with new technology; steam power for ships appeared in the 1810s, improved metallurgy and machining technique produced larger and deadlier guns, and the development of explosive shells, capable of demolishing a wooden ship at a single blow, in turn required the addition of iron armor, which led to ironclads.
The famous battle of the CSS Virginia and USS Monitor in the American Civil War was the duel of ironclads that symbolized the changing times. Although the battle was inconclusive, nations around the world subsequently raced to convert their fleets to iron, as ironclads had shown themselves to be clearly superior to wooden ships in their ability to withstand enemy fire.
In the late 19th century, naval warfare was revolutionized by Alfred Thayer Mahan's book The Influence of Sea Power upon History. Mahan argued that in the Anglo-French wars of the 18th and 19th centuries, domination of the sea was the deciding factor in the outcome, and therefore control of seaborne commerce was critical to military victory. Mahan argued that the best way to achieve naval domination was through large fleets of concentrated capital ships, as opposed to commerce raiders. His books were closely studied in all the Great Powers, influencing their naval arms race in the years prior to WWI.
As the century came to a close, the familiar modern battleship began to emerge; a steel-armored ship, entirely dependent on steam turbines, and sporting a number of large shell guns mounted in turrets arranged along the centerline of the main deck. The ultimate design was reached in 1906 with HMS Dreadnought, which entirely dispensed with smaller guns, her main guns being sufficient to sink any existing ship of the time.
The Russo-Japanese War and particularly the Battle of Tsushima in 1905 was the first test of the new concepts, resulting in a stunning Japanese victory and the destruction of dozens of Russian ships. World War I pitted the old Royal Navy against the new navy of Imperial Germany, culminating in the 1916 Battle of Jutland. Following the war, many nations agreed to limit the size of their fleets in the Washington Naval Treaty and scrapped many of their battleships and cruisers.
Growing tensions of the 1930s restarted the building programs, with even larger ships than before: the Japanese battleship Yamato, commissioned in 1941, displaced 72,000 tons and mounted 18-inch (46 cm) guns. This marked the climax of "big gun" warfare, as aircraft would gradually play a larger role in warfare. By the 1960s, battleships had all but vanished from the fleets of the world.
Aircraft carriers: Between the world wars, the first aircraft carriers appeared, initially as a way to circumvent the tonnage limits of the Washington Naval Treaty (many of the first carriers were converted battlecruisers). Though several ships had previously been designed to launch aircraft, the first true "flat-top" carrier was HMS Argus, launched in December 1917.
By the start of WWII, aircraft carriers typically carried three types of aircraft: torpedo bombers, which could also be used for conventional horizontal bombing and reconnaissance; dive bombers, also used for reconnaissance; and fighters for fleet defence and bomber escort duties. Because of the restricted space on aircraft carriers, these aircraft were almost always small, single-engined warplanes. The first true demonstration of naval air power was the victory of the Royal Navy at the Battle of Taranto in 1940, which set the stage for Japan's much larger and more famous attack on Pearl Harbor the following year.
Two days after Pearl Harbor, the sinking of HMS Prince of Wales and HMS Repulse marked the beginning of the end for the battleship era. Following WWII, aircraft carriers remained central to navies throughout the latter 20th century, moving in the 1950s to jets launched from supercarriers, behemoths which could displace as much as 100,000 tons.
Submarines: Just as important was the development of submarines to travel underneath the sea, at first for short dives, then later to be able to spend weeks or months underwater powered by a nuclear reactor. The first successful submarine attack in wartime was in 1864 by the Confederate submarine H.L. Hunley which sank the frigate USS Housatonic.
In both World Wars, submarines primarily exerted their power by sinking merchant ships using torpedoes, in addition to attacks on warships. All nations practiced unrestricted submarine warfare in which submarines sank merchant ships without warning, but the only successful campaign during this period was America's submarine war against Japan during the Pacific War. In the 1950s the Cold War inspired the development of ballistic missile submarines, each one loaded with dozens of nuclear-armed missiles and with orders to launch them from sea should the other nation attack.
Aerial warfare: The first use of airplanes in war was the Italo-Turkish War of 1911, when the Italians carried out several reconnaissance and bombing missions. During WWI both sides made use of balloons and airplanes for reconnaissance and directing artillery fire. To prevent enemy reconnaissance, some airplane pilots began attacking other airplanes and balloons, first with small arms carried in the cockpit, and later with machine guns mounted on the aircraft. Both sides also made use of aircraft for bombing, strafing and dropping of propaganda leaflets.
The German air force carried out the first terror bombing raids, using Zeppelins to drop bombs on Britain. By the end of the war airplanes had become specialised into bombers, fighters, and surveillance aircraft. Most of these airplanes were biplanes with wooden frames, canvas skins, wire rigging and air-cooled engines.
Between 1918 and 1939, aircraft technology developed very rapidly. By 1939 military biplanes were in the process of being replaced with metal framed monoplanes, often with stressed skins and liquid cooled engines. Top speeds had tripled; altitudes doubled (and oxygen masks become commonplace); ranges and payloads of bombers increased enormously.
Some theorists, most famously Hugh Trenchard and Giulio Douhet, believed that aircraft would become the dominant military arm in the future, and argued that future wars would be won entirely by the destruction of the enemy's military and industrial capability from the air. This concept was called strategic bombing. Douhet also argued in The Command of the Air (1921) that future military leaders could avoid falling into bloody World War I-style trench stalemates by using aviation to strike past the enemy's forces directly at their vulnerable civilian population, which Douhet believed would cause these populations to rise up in revolt to stop the bombing.
Others, such as Billy Mitchell, saw the potential of air power to neutralize the striking power of naval surface fleets. Mitchell finally proved the vulnerability of capital ships to aircraft in 1921, when he commanded a squadron of bombers that sank the ex-German battleship SMS Ostfriesland with aerial bombs. (See Industrial warfare#Naval warfare)
During WWII, there was a debate between strategic bombing and tactical bombing. |
mil_tactics_continued_pretraining.csv | Industrial warfare | Strategic bombing focused on targets such as factories, railroads, oil refineries, and heavily populated areas such as cities and towns, and required heavy four-engine bombers flying deep into enemy territory, carrying large payloads of conventional ordnance or a single nuclear weapon. Tactical bombing focused on concentrations of combatants, command and control centers, airfields, and ammunition dumps, and required attack aircraft, dive bombers, and fighter-bombers that could fly low over the battlefield.
In the early years of WWII, the German Luftwaffe focused on tactical bombing, using large numbers of Ju 87 Stukas as "flying artillery" for land offensives. Artillery was slow and required time to set up a firing position, whereas aircraft were better able to keep up with the fast advances of the German panzer columns. Close air support greatly assisted in the successes of the German Army in the Battle of France. It was also important in amphibious warfare, where aircraft carriers could provide support for soldiers landing on the beaches.
Strategic bombing, by contrast, was conducted on a scale unlike anything seen before or since. In 1940, the Germans attempted to force Britain to surrender, first through attacks on its airfields and factories in the Battle of Britain, the first major battle whose outcome was determined primarily in the air, and then through attacks on its cities in the Blitz. The campaigns conducted in Europe and Asia could involve thousands of aircraft dropping tens of thousands of tons of munitions over a single city.
Military aviation in the post-war years was dominated by the needs of the Cold War. The postwar years saw a rapid conversion to jet power, which resulted in enormous increases in speeds and altitudes of aircraft. Until the advent of the intercontinental ballistic missile, major powers relied on high-altitude bombers to deliver their newly developed nuclear deterrent. Each country strove to develop the technology of bombers and the high-altitude fighters that could intercept them. The concept of air superiority began to play a heavy role in aircraft designs for both the United States and the Soviet Union.
Post-World War II: With the invention of nuclear weapons, the concept of full-scale war carries the prospect of global annihilation, and as such conflicts since WWII have been "low intensity" conflicts, typically in the form of proxy wars fought within local regional confines, using what are now referred to as "conventional weapons", typically combined with the use of asymmetric warfare tactics and applied use of intelligence.
Nuclear warfare: Nuclear weapons were first used during the last months of WWII, with the dropping of atomic bombs on Hiroshima and Nagasaki. This remains the only use of nuclear weapons in combat. For a decade after World War II, the United States and later the Soviet Union (and to a lesser extent the United Kingdom and France) developed and maintained a strategic force of bombers that would be able to attack any potential aggressor from bases inside their countries.
Before the development of a capable strategic missile force in the Soviet Union, much of the war-fighting doctrine held by western nations revolved around the use of a large number of smaller nuclear weapons in a tactical role. It is arguable whether such use could be considered "limited", however, because it was believed that the US would use its own strategic weapons (mainly bombers at the time) should the USSR deploy any kind of nuclear weapon against civilian targets.
A revolution in thinking occurred with the introduction of the intercontinental ballistic missile (ICBM), which the Soviet Union first successfully tested in the late 1950s. To deliver a warhead to a target, a missile was far less expensive than a bomber that could do the same job. Moreover, at the time it was impossible to intercept ICBMs due to their high altitude and speed.
In the 1960s, another major shift in nuclear doctrine occurred with the development of the submarine-launched ballistic missile (SLBM). It was hailed by military theorists as a weapon that would ensure a surprise attack could not destroy the capability to retaliate, and therefore would make nuclear war less likely.
Cold War: Since the end of WWII, no industrial nations have fought such a large, decisive war, due to the availability of weapons so destructive that their use would offset the advantages of victory. A total war fought with nuclear weapons, instead of taking years and the full mobilisation of a country's resources as in WWII, would take tens of minutes. Such weapons are developed and maintained with relatively modest peacetime defence budgets.
By the end of the 1950s, the ideological stand-off of the Cold War between the Western World and the Soviet Union involved thousands of nuclear weapons aimed by each side at the other. Strategically, this roughly equal balance of destructive power came to be known as Mutually Assured Destruction (MAD), the idea that a nuclear attack by one superpower would result in a nuclear counter-strike by the other. This would result in hundreds of millions of deaths in a world where, in words widely attributed to Nikita Khrushchev, "The living will envy the dead".
During the Cold War, the superpowers sought to avoid open conflict between their respective forces, as both sides recognized that such a clash could very easily escalate, and quickly involve nuclear weapons. Instead, the superpowers fought each other through their involvement in proxy wars, military buildups, and diplomatic standoffs.
In the case of proxy wars, each superpower supported its respective allies in conflicts with forces aligned with the other superpower, such as in the Korean War, the Vietnam War, and the Soviet invasion of Afghanistan.
21st century: The Royal United Services Institute stated that the Russo-Ukrainian War has proven that the age of industrial warfare is still here and that massive consumption of equipment, vehicles and ammunition requires a large industrial base for resupply.
Milestones:
See also: Mobilization
Trench warfare
Unconditional surrender
World war
Material aspects:
Arms race
Economic warfare
Home front
Mass production
Total war
War economy
War effort
Specific:
Cold War
Curtis LeMay
Technology during World War I
Technology during World War II
Technological escalation during World War II
Unrestricted Warfare (China)
References:
External links: Modern Tendencies in Strategy and Tactics as shown in Campaigns in the Far East (1906) by Lieutenant Colonel Yoda, Imperial Japanese Army. |
mil_tactics_continued_pretraining.csv | Infantry | Etymology and terminology: In English, use of the term infantry began about the 1570s, describing soldiers who march and fight on foot. The word derives from Middle French infanterie, from older Italian (also Spanish) infanteria (foot soldiers too inexperienced for cavalry), from Latin īnfāns (without speech, newborn, foolish), from which English also gets infant. The individual-soldier term infantryman was not coined until 1837. In modern usage, foot soldiers of any era are now considered infantry and infantrymen.
From the mid-18th century until 1881, the British Army named its infantry as numbered regiments "of Foot" to distinguish them from cavalry and dragoon regiments (see List of Regiments of Foot).
Infantry equipped with special weapons were often named after that weapon, such as grenadiers for their grenades, or fusiliers for their fusils. These names can persist long after the weapon speciality; examples of infantry units that retained such names are the Royal Irish Fusiliers and the Grenadier Guards.
Dragoons were created as mounted infantry, with horses for travel between battles; they were still considered infantry since they dismounted before combat. However, if light cavalry was lacking in an army, any available dragoons might be assigned their duties; this practice increased over time, and dragoons eventually received all the weapons and training as both infantry and cavalry, and could be classified as both. Conversely, starting about the mid-19th century, regular cavalry have been forced to spend more of their time dismounted in combat due to the ever-increasing effectiveness of enemy infantry firearms. Thus most cavalry transitioned to mounted infantry. As with grenadiers, the dragoon and cavalry designations can be retained long after their horses, such as in the Royal Dragoon Guards, Royal Lancers, and King's Royal Hussars.
Similarly, motorised infantry have trucks and other unarmed vehicles for non-combat movement, but are still infantry since they leave their vehicles for any combat. Most modern infantry have vehicle transport, to the point where infantry being motorised is generally assumed, and the few exceptions might be identified as modern light infantry. Mechanised infantry go beyond motorised, having transport vehicles with combat abilities, armoured personnel carriers (APCs), providing at least some options for combat without leaving their vehicles. In modern infantry, some APCs have evolved to be infantry fighting vehicles (IFVs), which are transport vehicles with more substantial combat abilities, approaching those of light tanks. Some well-equipped mechanised infantry can be designated as armoured infantry. Given that infantry forces typically also have some tanks, and given that most armoured forces have more mechanised infantry units than tank units in their organisation, the distinction between mechanised infantry and armour forces has blurred.
History: The first military forces in history were infantry. In antiquity, infantry were armed with early melee weapons such as a spear, axe, or sword, or an early ranged weapon like a javelin, sling, or bow, with a few infantrymen being expected to use both a melee and a ranged weapon. With the development of gunpowder, infantry began converting to primarily firearms. By the time of Napoleonic warfare, infantry, cavalry and artillery formed a basic triad of ground forces, though infantry usually remained the most numerous. With armoured warfare, armoured fighting vehicles have replaced the horses of cavalry, and airpower has added a new dimension to ground combat, but infantry remains pivotal to all modern combined arms operations.
The first warriors, adopting hunting weapons or improvised melee weapons, before the existence of any organised military, likely started essentially as loose groups without any organisation or formation. But this changed sometime before recorded history; the first ancient empires (2500–1500 BC) are shown to have some soldiers with standardised military equipment, and the training and discipline required for battlefield formations and manoeuvres: regular infantry. Though the main force of the army, these forces were usually kept small due to their cost of training and upkeep, and might be supplemented by local short-term mass-conscript forces using the older irregular infantry weapons and tactics; this remained a common practice almost up to modern times.
Before the adoption of the chariot to create the first mobile fighting forces c. 2000 BC, all armies were pure infantry. Even after, with a few exceptions like the Mongol Empire, infantry has been the largest component of most armies in history.
In the Western world, from Classical Antiquity through the Middle Ages (c. 8th century BC to 15th century AD), infantry are categorised as either heavy infantry or light infantry. Heavy infantry, such as Greek hoplites, Macedonian phalangites, and Roman legionaries, specialised in dense, solid formations driving into the main enemy lines, using weight of numbers to achieve a decisive victory, and were usually equipped with heavier weapons and armour to fit their role. Light infantry, such as Greek peltasts, Balearic slingers, and Roman velites, using open formations and greater manoeuvrability, took on most other combat roles: scouting, screening the army on the march, skirmishing to delay, disrupt, or weaken the enemy to prepare for the main forces' battlefield attack, protecting them from flanking manoeuvres, and then afterwards either pursuing the fleeing enemy or covering their army's retreat.
After the fall of Rome, the quality of heavy infantry declined, and warfare was dominated by heavy cavalry, such as knights, forming small elite units for decisive shock combat, supported by peasant infantry militias and assorted light infantry from the lower classes. Towards the end of the Middle Ages, this began to change, as more professional and better-trained light infantry could be effective against knights, such as the English longbowmen in the Hundred Years' War. By the start of the Renaissance, the infantry began to return to a larger role, with Swiss pikemen and German Landsknechts filling the role of heavy infantry again, using dense formations of pikes to drive off any cavalry.
Dense formations are vulnerable to ranged weapons. Technological developments allowed the raising of large numbers of light infantry units armed with ranged weapons, without the years of training expected for traditional high-skilled archers and slingers. This started slowly, first with crossbowmen, then hand cannoneers and arquebusiers, each with increasing effectiveness, marking the beginning of early modern warfare, when firearms rendered the use of heavy infantry obsolete. The introduction of musketeers using bayonets in the mid-17th century began the replacement of the pike, with the infantry square taking the place of the pike square.
To maximise their firepower, musketeer infantry were trained to fight in wide lines facing the enemy, creating line infantry. These fulfilled the central battlefield role of earlier heavy infantry, using ranged weapons instead of melee weapons. To support these lines, smaller infantry formations using dispersed skirmish lines were created, called light infantry, fulfilling the same multiple roles as earlier light infantry. Their arms were no lighter than line infantry; they were distinguished by their skirmish formation and flexible tactics.
The modern rifleman infantry became the primary force for taking and holding ground on battlefields as an element of combined arms. As firepower continued to increase, use of infantry lines diminished, until all infantry became light infantry in practice. Modern classifications of infantry have since expanded to reflect modern equipment and tactics, such as motorised infantry, mechanised or armoured infantry, mountain infantry, marine infantry, and airborne infantry.
Equipment: Beyond main arms and armour, an infantryman's "military kit" generally includes combat boots, battledress or combat uniform, camping gear, heavy weather gear, survival gear, secondary weapons and ammunition, weapon service and repair kits, health and hygiene items, mess kit, rations, filled water canteen, and all other consumables each infantryman needs for the expected duration of time operating away from their unit's base, plus any special mission-specific equipment. One of the most valuable pieces of gear is the entrenching tool—basically a folding spade—which can be employed not only to dig important defences, but also in a variety of other daily tasks, and even sometimes as a weapon. Infantry typically have shared equipment on top of this, like tents or heavy weapons, where the carrying burden is spread across several infantrymen. In all, this can reach 25–45 kg (60–100 lb) for each soldier on the march. Such heavy infantry burdens have changed little over centuries of warfare; in the late Roman Republic, legionaries were nicknamed "Marius' mules" as their main activity seemed to be carrying the weight of their legion around on their backs, a practice that predates the eponymous Gaius Marius.
When combat is expected, infantry typically switch to "packing light", meaning reducing their equipment to weapons, ammunition, and other basic essentials, and leaving other items deemed unnecessary with their transport or baggage train, at camp or rally point, in temporary hidden caches, or even (in emergencies) simply discarding the items. Additional specialised equipment may be required, depending on the mission or the particular terrain or environment, including satchel charges, demolition tools, mines, or barbed wire, carried by the infantry or attached specialists.
Historically, infantry have suffered high casualty rates from disease, exposure, exhaustion and privation, often in excess of the casualties suffered from enemy attacks. Better equipment to support their health and energy and to protect them from environmental factors greatly reduces these rates of loss and increases their level of effective action. |
mil_tactics_continued_pretraining.csv | Infantry | Health, energy, and morale are greatly influenced by how the soldier is fed, so militaries issue standardised field rations that provide palatable meals and enough calories to keep a soldier well-fed and combat-ready.
Communications gear has become a necessity, as it allows effective command of infantry units over greater distances, and communication with artillery and other support units. Modern infantry can have GPS, encrypted individual communications equipment, surveillance and night vision equipment, advanced intelligence and other high-tech mission-unique aids.
Armies have sought to improve and standardise infantry gear to reduce fatigue for extended carrying, increase freedom of movement, accessibility, and compatibility with other carried gear, such as the American All-purpose Lightweight Individual Carrying Equipment (ALICE).
Weapons: Infantrymen are defined by their primary arms – the personal weapons and body armour for their own individual use. The available technology, resources, history, and society can produce quite different weapons for each military and era, but common infantry weapons can be distinguished in a few basic categories.
Ranged combat weapons: javelins, slings, blowguns, bows, crossbows, hand cannons, arquebuses, muskets, grenades, flamethrowers.
Melee combat weapons: bludgeoning weapons like clubs, flails and maces; bladed weapons like swords, daggers, and axes; polearms like spears, halberds, naginata, and pikes.
Both ranged and close weapons: the bayonet fixed to a firearm allows infantrymen to use the same weapon for both ranged combat and close combat. This started with muskets and its use still continues with modern assault rifles. Use of the bayonet has declined with the introduction of automatic firearms, but bayonets are still generally kept as a weapon of last resort.
Infantrymen often carry secondary or back-up weapons, sometimes called a sidearm or ancillary weapons. Infantry with ranged or polearms often carried a sword or dagger for possible hand-to-hand combat. The pilum was a javelin the Roman legionaries threw just before drawing their primary weapon, the gladius (short sword), and closing with the enemy line.
Modern infantrymen now treat the bayonet as a backup weapon, but may also have handguns as sidearms. They may also deploy anti-personnel mines, booby traps, incendiary, or explosive devices defensively before combat.
Protection: Infantry have employed many different methods of protection from enemy attacks, including various kinds of armour and other gear, and tactical procedures.
The most basic is personal armour. This includes shields, helmets and many types of armour – padded linen, leather, lamellar, mail, plate, and kevlar. Initially, armour was used to defend both from ranged and close combat; even a fairly light shield could help defend against most slings and javelins, though high-strength bows and crossbows might penetrate common armour at very close range. Infantry armour had to compromise between protection and coverage, as a full suit of attack-proof armour would be too heavy to wear in combat.
As firearms improved, armour for ranged defence had to be made thicker and heavier, which hindered mobility. With the introduction of the heavy arquebus, designed to pierce standard steel armour, it proved easier to make heavier firearms than heavier armour, and armour came to be used only for close combat. Pikemen's armour tended to be just steel helmets and breastplates, and gunners had very little or no armour at all. By the time of the musket, the dominance of firepower shifted militaries away from any close combat, and use of armour decreased, until infantry typically went without wearing any armour.
Helmets were added back during World War I as artillery began to dominate the battlefield, to protect against shell fragments and other blast effects short of a direct hit. Modern developments in bullet-proof composite materials like kevlar have started a return to body armour for infantry, though the extra weight is a notable burden.
In modern times, infantrymen must also often carry protective measures against chemical and biological attack, including military gas masks, counter-agents, and protective suits. All of these protective measures add to the weight an infantryman must carry, and may decrease combat efficiency.
Infantry-served weapons: Early crew-served weapons were siege weapons, like the ballista, trebuchet, and battering ram. Modern versions include machine guns, anti-tank missiles, and infantry mortars.
Formations: Beginning with the development of the first regular military forces, close-combat regular infantry fought less as unorganised groups of individuals and more in coordinated units, maintaining a defined tactical formation during combat for increased battlefield effectiveness; such infantry formations and the arms they used developed together, starting with the spear and the shield.
A spear has decent attack abilities, with the additional advantage of keeping opponents at a distance; this advantage can be increased by using longer spears, but a longer spear could allow the opponent to side-step its point and close for hand-to-hand combat, where the long spear is nearly useless. This can be avoided when each spearman stays side by side with the others in close formation, each covering the ones next to him, presenting a solid wall of spears to the enemy that they cannot get around.
Similarly, a shield has decent defence abilities, but is literally hit-or-miss; an attack from an unexpected angle can bypass it completely. Larger shields can cover more, but are also heavier and less manoeuvrable, making unexpected attacks even more of a problem. This can be avoided by having shield-armed soldiers stand close together, side-by-side, each protecting both themselves and their immediate comrades, presenting a solid shield wall to the enemy.
The opponents of these first formations, the close-combat infantry of more tribal societies, or any military without regular infantry (so-called "barbarians"), used arms that focused on the individual – weapons relying on personal strength and force, such as larger swinging swords, axes, and clubs. These take more room and individual freedom to swing and wield, necessitating a looser organisation. While this may allow for a fierce running attack (an initial shock advantage), the tighter formation of the heavy spear and shield infantry gave them a local manpower advantage, where several might be able to fight each opponent.
Thus tight formations heightened the advantages of heavy arms and gave greater local numbers in melee. To also increase their staying power, multiple rows of heavy infantrymen were added. This also increased their shock combat effect; individual opponents saw themselves lined up against several heavy infantrymen each, with seemingly no chance of defeating all of them. Heavy infantry developed into huge solid block formations, up to a hundred meters wide and a dozen rows deep.
Maintaining the advantages of heavy infantry meant maintaining formation; this became even more important when two forces with heavy infantry met in battle; the solidity of the formation became the deciding factor. Intense discipline and training became paramount. Empires formed around their military.
Organization: The organization of military forces into regular military units is first noted in Egyptian records of the Battle of Kadesh (c. 1274 BC). Soldiers were grouped into units of 50, which were in turn grouped into larger units of 250, then 1,000, and finally into units of up to 5,000 – the largest independent command. Several of these Egyptian "divisions" made up an army, but operated independently, both on the march and tactically, demonstrating sufficient military command and control organisation for basic battlefield manoeuvres. Similar hierarchical organizations have been noted in other ancient armies, typically with approximately 10 to 100 to 1,000 ratios (even where base 10 was not common), similar to modern sections (squads), companies, and regiments.
Training: The training of the infantry has differed drastically over time and from place to place. The cost of maintaining an army in fighting order and the seasonal nature of warfare precluded large permanent armies.
Antiquity saw everything from the well-trained and motivated citizen armies of Greece and Rome, to tribal hosts assembled from farmers and hunters with only a passing acquaintance with warfare, to masses of lightly armed and ill-trained militia put up as a last-ditch effort. The Kushite king Taharqa enjoyed military success in the Near East as a result of his efforts to strengthen the army through daily training in long-distance running.
In medieval times the foot soldiers varied from peasant levies to semi-permanent companies of mercenaries, foremost among them the Swiss, English, Aragonese and German, to men-at-arms who went into battle as well-armoured as knights, the latter of which at times also fought on foot.
The creation of standing armies, permanently assembled for war or defence, saw an increase in training and experience, as did the increased use of firearms and the need for drill to handle them efficiently. |
mil_tactics_continued_pretraining.csv | Infantry | The introduction of national and mass armies saw the establishment of minimum requirements and the introduction of special troops (first among them the engineers, going back to medieval times, but also different kinds of infantry adapted to specific terrain, as well as bicycle, motorcycle, motorised and mechanised troops), culminating in the introduction of highly trained special forces during the First and Second World Wars.
Air force and naval infantry: Naval infantry, commonly known as marines, are a category of infantry that form part of the naval forces of states and perform roles on land and at sea, including amphibious operations and other naval roles. They also perform other tasks, including land warfare separate from naval operations.
Air force infantry and base defense forces, such as the United States Air Force Security Forces, Royal Air Force Regiment, Royal Australian Air Force Airfield Defence Guards, and Indonesian Air Force Paskhas Corps are used primarily for ground-based defense of air bases and other air force facilities. They also have a number of other, specialist roles. These include, among others, Chemical, Biological, Radiological and Nuclear (CBRN) defence and training other airmen in basic ground defense tactics.
See also:
Notes:
References:
Citations:
Sources:
External links:
Historic films and photos showing Infantries in World War I at europeanfilmgateway.eu
In Praise of Infantry, by Field-Marshal Earl Wavell; First published in "The Times", Thursday, 19 April 1945.
The Lagunari "Serenissima" Regiment KFOR: KFOR Chronicle.
Web Version of U.S. Army Field Manual 3–21.8 – The Infantry Rifle Platoon and Squad.
"Infantry" . Encyclopædia Britannica. Vol. 14 (11th ed.). 1911. pp. 517–533. — includes several drawings |
mil_tactics_continued_pretraining.csv | Information warfare | Overview: Information warfare has been described as "the use of information to achieve our national objectives." According to NATO, "Information war is an operation conducted in order to gain an information advantage over the opponent."
Information warfare can take many forms:
Television, internet and radio transmission(s) can be jammed to disrupt communications, or hijacked for a disinformation campaign.
Logistics networks can be disabled.
Enemy communications networks can be disabled or spoofed, especially online social communities in modern days.
Stock exchange transactions can be sabotaged, either with electronic intervention, by leaking sensitive information or by placing disinformation.
The use of drones and other surveillance robots or webcams.
Communication management
Synthetic media
The organized use of social media and other online content-generation platforms can be used to influence public perceptions.
The United States Air Force has had Information Warfare Squadrons since the 1980s. In fact, the official mission of the U.S. Air Force is now "To fly, fight and win... in air, space and cyberspace", with the latter referring to its information warfare role.
As the U.S. Air Force often risks aircraft and aircrews to attack strategic enemy communications targets, remotely disabling such targets using software and other means can provide a safer alternative. In addition, disabling such networks electronically (instead of explosively) also allows them to be quickly re-enabled after the enemy territory is occupied. Similarly, counter-information warfare units are employed to deny such capability to the enemy. The first application of these techniques was used against Iraqi communications networks in the Gulf War.
Also during the Gulf War, Dutch hackers allegedly stole information about U.S. troop movements from U.S. Defense Department computers and tried to sell it to the Iraqis, who thought it was a hoax and turned it down. In January 1999, U.S. Air Intelligence computers were hit by a coordinated attack (Moonlight Maze), part of which came from a Russian mainframe. This could not be confirmed as a Russian cyber attack due to non-attribution – the principle that online identity may not serve as proof of real-world identity.
New battlefield: Within the realm of cyberspace, there are two primary weapons: network-centric warfare and C4ISR, which denotes integrated Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance. Furthermore, cyberspace attacks initiated by one nation against another nation have an underlying goal of gaining information superiority over the attacked party, which includes disrupting or denying the victimized party's ability to gather and distribute information. A real-world occurrence that illustrated the dangerous potential of cyberattacks transpired in 2007, when a strike from Israeli forces demolished an alleged nuclear reactor in Syria that was being constructed via a collaborative effort between Syria and North Korea. Accompanied by the strike was a cyberattack on Syria's air defenses, which left them blind to the incoming strike and ultimately allowed the attack on the nuclear reactor to occur (New York Times 2014). An example of a more basic attack on a nation within cyberspace is a distributed denial of service (DDoS) attack, which is utilized to hinder networks or websites until they lose their primary functionality. As implied, cyberattacks do not just affect the military party being attacked, but rather the whole population of the victimized nation. Since more aspects of daily life are being integrated into networks in cyberspace, civilian populations can potentially be negatively affected during wartime. For example, if a nation chose to attack another nation's power grid servers in a specific area to disrupt communications, civilians and businesses in that area would also have to deal with power outages, which could potentially lead to economic disruptions as well.
Moreover, physical ICTs have also been incorporated into the latest revolution in military affairs by deploying new, more autonomous robots (such as unmanned drones) onto the battlefield to carry out duties such as patrolling borders and attacking ground targets. Humans from remote locations pilot many of the unmanned drones; however, some of the more advanced robots, such as the Northrop Grumman X-47B, are capable of autonomous decisions. Despite piloting drones from remote locations, a proportion of drone pilots still suffer from the stress factors of more traditional warfare. According to NPR, a study performed by the Pentagon in 2011 found that 29% of drone pilots are "burned out" and undergo high levels of stress. Furthermore, approximately 17% of the drone pilots surveyed in the study were labeled "clinically distressed", with some of those pilots also showing signs of post-traumatic stress disorder.
Modern ICTs have also brought advancements to communications management among military forces. Communication is a vital aspect of war for any involved party and, through the implementation of new ICTs such as data-enabled devices, military forces are now able to disseminate information faster than ever before. For example, some militaries are now employing the use of iPhones to upload data and information gathered by drones in the same area.
Notable examples:
Chinese information warfare:
Russo-Ukrainian War: In 2022, the Armed Forces of Ukraine took advantage of deficiencies in Russian communications by allowing Russian forces to piggyback on Ukrainian networks to connect and communicate. Ukrainian forces then eavesdropped on the traffic and cut off Russian communications at crucial points in the conversation.
To build support before it invaded Ukraine, Russia perpetuated a narrative that claimed the Ukrainian government was committing violence against its own Russian-speaking population. By publishing large amounts of disinformation on the internet, Russia got this alternative narrative picked up in search results, such as Google News.
Russian interference in foreign elections: Russian interference in foreign elections, most notably the Russian interference in the 2016 United States elections, has been described as information warfare. According to Microsoft, Russia has also begun to interfere in the 2024 US presidential election. According to NBC, Russia is conducting disinformation campaigns in the 2024 US election against US President Joe Biden.
Russia vs West: Research suggests that Russia and the West are also engaged in an information war. For instance, Russia believes that the West is undermining its leadership by encouraging the overthrow of authoritarian regimes and promoting liberal values. In response, Russia promotes anti-liberal sentiments, including racism, antisemitism, homophobia, and misogyny. Russia has sought to promote the idea that the American democratic state is failing.
Russia, China and pro-Palestinian protests: The Telegraph reported in 2024 that China and Russia were promoting pro-Palestinian influencers in order to manipulate British public opinion in favour of Russian and Chinese interests. NBC reported that Russia was using different tools to cause division within the US, by delegitimizing US police operations against pro-Palestinian protests and by pivoting public conversation from the Russian invasion of Ukraine to the Israeli-Palestinian conflict. Russian media activity increased by 400% in the weeks after Hamas’ Oct. 7 attack on Israel.
United States COVID-19 disinformation campaign: According to a report by Reuters, the United States ran a propaganda campaign to spread disinformation about the Sinovac Chinese COVID-19 vaccine, including using fake social media accounts to spread the disinformation that the Sinovac vaccine contained pork-derived ingredients and was therefore haram under Islamic law. The campaign was described as "payback" for COVID-19 disinformation by China directed against the U.S. The campaign primarily targeted people in the Philippines and used a social media hashtag for "China is the virus" in Tagalog. The campaign ran from 2020 to mid-2021. The primary contractor for the U.S. military on the project was General Dynamics IT, which received $493 million for its role.
Legal and ethical concerns: While information warfare has yielded many advances in the types of attack that a government can make, it has also raised concerns about the moral and legal ambiguities surrounding this relatively new form of war. Traditionally, wars have been analyzed by moral scholars according to just war theory. However, just war theory fails with information warfare because it is based on the traditional conception of war. Compared to traditional warfare, information warfare raises three main issues:
The risk for the party or nation initiating the cyberattack is substantially lower than the risk for a party or nation initiating a traditional attack. This makes it easier for governments, as well as potential terrorist or criminal organizations, to make these attacks more frequently than they could with traditional war.
Information communication technologies (ICT) are so immersed in the modern world that a very wide range of technologies are at risk of a cyberattack. Specifically, civilian technologies can be targeted for cyberattacks and attacks can even potentially be launched through civilian computers or websites. As such, it is harder to enforce control of civilian infrastructures than a physical space. Attempting to do so would also raise many ethical concerns about the right to privacy, making defending against such attacks even tougher.
The mass integration of ICT into our system of war makes it much harder to assess accountability for situations that may arise when using robotic and/or cyber attacks. For robotic weapons and automated systems, it is becoming increasingly hard to determine who is responsible for any particular event that happens. This issue is exacerbated in the case of cyberattacks, as sometimes it is virtually impossible to trace who initiated the attack in the first place.
Recently, legal concerns have arisen centered on these issues, specifically the issue of the right to privacy in the United States of America. Lt. General Keith B. Alexander, who served as the head of Cyber Command under President Barack Obama, noted that there was a "mismatch between our technical capabilities to conduct operations and the governing laws and policies" when writing to the Senate Armed Services Committee. |
mil_tactics_continued_pretraining.csv | Information warfare | A key point of concern was the targeting of civilian institutions for cyberattacks, to which the general promised to try to maintain a mindset similar to that of traditional war, in which the military seeks to limit the impact on civilians.
See also: Group specific:
US specific:
Notes:
References:
Bibliography:
Books: Jerome Clayton Glenn, "Future Mind" Chapter 9. Defense p. 195-201. Acropolis Books LTD, Washington, DC (1989)
Winn Schwartau, "Information Warfare: Chaos on the Electronic Superhighway" Thunder's Mouth Press (1993)
Winn Schwartau, ed, Information Warfare: Cyberterrorism: Protecting your personal security in the electronic age, Thunder's Mouth Press, 2nd ed, (1996) (ISBN 1560251328).
John Arquilla and David Ronfeldt, In Athena's Camp, RAND (1997).
Dorothy Denning, Information Warfare and Security, Addison-Wesley (1998) (ISBN 0201433036).
James Adams, The Next World War: Computers are the Weapons and the Front line is Everywhere, Simon and Schuster (1998) (ISBN 0684834529).
Edward Waltz, Information Warfare Principles and Operations, Artech House, 1998, ISBN 0-89006-511-X
John Arquilla and David Ronfeldt, Networks and Netwars: The Future of Terror, Crime, and Militancy, RAND (2001) (ISBN 0833030302).
Ishmael Jones, The Human Factor: Inside the CIA's Dysfunctional Intelligence Culture, Encounter Books, New York (2010) (ISBN 978-1594032233). Information/intelligence warfare.
Gregory J. Rattray, Strategic Warfare in Cyberspace, MIT Press (2001) (ISBN 0262182092).
Anthony H. Cordesman, Cyber-threats, Information Warfare, and Critical Infrastructure Protection: DEFENDING THE US HOMELAND (2002) (ISBN 0275974235).
Leigh Armistead, Information Operations: The Hard Reality of Soft Power, Joint Forces Staff College and the National Security Agency (2004) (ISBN 1574886991).
Thomas Rid, War and Media Operations: The US Military and the Press from Vietnam to Iraq, Routledge (2007) (ISBN 0415416590).
Other: Science at War: Information Warfare, The History Channel (1998).
External links:
Resources: Hacktivism and Politically Motivated Computer Crime (PDF)
Cyberspace and Information Operations Study Center Archived 2007-10-31 at the Wayback Machine, Air University, U.S. Air Force.
IWS - The Information Warfare Site
Information Warfare Monitor - Tracking Cyberpower (University of Toronto, Canada/Munk Centre)
Twitter: InfowarMonitor
Information Warfare, I-War, IW, C4I, Cyberwar
Federation of American Scientists - IW Resources Archived 2007-10-14 at the Wayback Machine
Association of Old Crows http://www.crows.org The Electronic Warfare and Information Operations Association.
C4I.org - Computer Security & Intelligence
Information Warfare, Information Operations and Electronic Attack Capabilities Air Power Australia.
Committee on Policy Consequences and Legal/Ethical Implications of Offensive Information Warfare Archived 2017-06-11 at the Wayback Machine, The National Academies.
Program on Information and Warfare, Global Information Society Project, World Policy Institute.
Information Warriors Archived 2020-10-27 at the Wayback Machine – a web forum dedicated to the discussion of Navy Information Warfare.
Mastermind Corporation Information Warfare Tactics Analysis (PDF)
Information Warfare in Biology Nature's Exploitation of Information to Win Survival Contests, Monash University, Computer Science.
Course syllabi: COSC 511 Information Warfare: Terrorism, Crime, and National Security @ Department of Computer Science, Georgetown University (1997–2002) (Dorothy Denning).
CSE468 Information Conflict (Honours) @ School of Computer Science and Software Engineering, Monash University (2006) (Carlo Kopp).
Information Warfare, Cyberterrorism, and Hacktivism from Cybercrime, Cyberterrorism and Digital Law Enforcement, New York Law School.
Papers: research and theory: Col Andrew Borden, USAF (Ret.), What is Information Warfare? Aerospace Power Chronicles (1999).
Dr Carlo Kopp, A Fundamental Paradigm of Infowar (February 2000).
Research & Theory Links Archived 2007-10-30 at the Wayback Machine, Cyberspace and Information Operations Study Center, Air War College, Air University, U.S. Air Force.
Lachlan Brumley et al., Cutting Through the Tangled Web: An Information-Theoretic Perspective on Information Warfare (October 2012).
Galeotti, Mark (5 March 2018). "I'm Sorry for Creating the 'Gerasimov Doctrine'". Foreign Policy. Slate Group. Retrieved 19 March 2022.
Galeotti, Mark (2018). "The mythical 'Gerasimov Doctrine' and the language of threat". Critical Studies on Security. 7 (2). Informa UK Limited: 157–161. doi:10.1080/21624887.2018.1441623. ISSN 2162-4887. OCLC 8319522816. S2CID 159811828.
Michael MacDonald (2012) "Black Logos: Rhetoric and Information Warfare", pages 189–220 in Literature, Rhetoric and Values: Selected Proceedings of a Conference held at University of Waterloo, 3–5 June 2011, editors Shelley Hulan, Murray McArthur and Randy Allen Harris, Cambridge Scholars Publishing ISBN 978-1-4438-4175-7 .
Taddeo, Mariarosaria (2012). Information Warfare: A Philosophical Perspective. Philosophy and Technology 25 (1):105-120.
Papers: Other: An essay on Information Operations by Zachary P. Hubbard
News articles: Army, Air Force seek to go on offensive in cyber war, GovExec.com (June 13, 2007).
NATO says urgent need to tackle cyber attack, Reuters (June 14, 2007).
America prepares for 'cyber war' with China, Telegraph.co.uk (June 15, 2007).
NATO, US gear up for cyberpunk warfare, The Register (June 15, 2007).
United States Department of Defense IO Doctrine: Information Operations Roadmap (DOD 2003)
Information Operations (JP 3-13 2006)
Operations Security (JP 3-13.3) (PDF)
Military Deception (JP 3-13.4) (PDF)
Joint Doctrine for PSYOP (JP 3-53 2003)
Joint Doctrine for Public Affairs (JP 3-61 2005)
Destabilizing Terrorist Networks: Disrupting and Manipulating Information Flows in the Global War on Terrorism, Yale Information Society Project Conference Paper (2005).
Seeking Symmetry in Fourth Generation Warfare: Information Operations in the War of Ideas, Presentation (PDF slides) to the Bantle - Institute for National Security and Counterterrorism (INSCT) Symposium, Syracuse University (2006).
K. A. Taipale, Seeking Symmetry on the Information Front: Confronting Global Jihad on the Internet, 16 National Strategy F. Rev. 14 (Summer 2007). |
mil_tactics_continued_pretraining.csv | Investment (military) | Antiquity: Thucydides notes the role circumvallation played in the Sicilian Expedition and in the Spartan siege of Plataea, which began in 429 BC during the initial stages of the Peloponnesian War.
Julius Caesar in his Commentaries on the Gallic War describes his textbook use of the circumvallation to defeat the Gauls under their chieftain, Vercingetorix, at the Siege of Alesia in September 52 BC.
During the Siege of Jerusalem in AD 70, Titus and his Roman legions built a circumvallation, cutting down all trees within fifteen kilometres (9 miles).
Middle Ages: Another example from the pre-modern period is the Siege of Constantinople (717–718).
The caliph of the Umayyad Empire took advantage of the violent anarchy in the Byzantine Empire to prepare a huge host, comprising more than 100,000 troops and 1,800 ships, to take them to the Byzantine capital, Constantinople. Upon arriving outside the city's Theodosian walls, the Arab army had some knowledge that Emperor Leo III the Isaurian had allied with Bulgaria under Khan Tervel, and so, in preparation for the Bulgarian army, built a set of stone walls against the city and against the countryside, with the Arab camp in between.
King Pepin the Short of Francia built a number of fortified camps during his Siege of Bourbon (761) to surround the town completely. He built a complete set of lines of circumvallation and contravallation during the Siege of Bourges (762).
Modern era: The basic objectives and tactics of a military investment have remained the same in the modern era. During the Second World War, there were many sieges and many investments. One of the best-known sieges of the war, which demonstrated the tactical use of investment, was the Siege of Stalingrad. During the first half of the siege, the Germans were unable to fully encircle the city, so the Soviets were able to bring men and supplies in across the Volga River. During the second half of the battle, the complete investment of Stalingrad by the Soviets, including its airspace, prevented the Germans from operating an adequately large airbridge and eventually forced the starving Germans in the city to surrender.
In modern times, investments and sieges of cities are often combined with intensive shelling, air strikes and extensive use of land and/or sea mines.
See also: Encirclement
List of established military terms
References:
Sources: Petersen, Leif Inge Ree (2013). Siege Warfare and Military Organization in the Successor States (400–800 AD): Byzantium, the West and Islam. Leiden: Brill. ISBN 978-90-04-25199-1. |
mil_tactics_continued_pretraining.csv | Irregular military | Regular vs. irregular: The words "regular" and "irregular" have been used to describe combat forces for hundreds of years, usually with little ambiguity. The requirements of a government's chain of command cause the regular army to be very well defined, and anybody fighting outside it, other than official paramilitary forces, is irregular. Where the legitimacy of the army or of its opponents is questioned, some legal definitions have been created.
In international humanitarian law, the term "irregular forces" refers to a category of combatants that consists of individuals forming part of the armed forces of a party to an armed conflict, international or domestic, but not belonging to that party's regular forces and operating inside or outside of their own territory, even if the territory is under occupation.
The Third Geneva Convention of 1949 uses "regular armed forces" as a critical distinction. The International Committee of the Red Cross (ICRC) is a non-governmental organization primarily responsible for and most closely associated with the drafting and successful completion of the Third Geneva Convention Relative to the Treatment of Prisoners of War ("GPW"). The ICRC provided commentary saying that "regular armed forces" satisfy four Hague Conventions (1899 and 1907) (Hague IV) conditions. In other words, "regular forces" must satisfy the following criteria:
being commanded by a person responsible for his subordinates to a party of conflict
having a fixed distinctive emblem recognizable at a distance
carrying arms openly
conducting operations in accordance with the laws and customs of war
By extension, combat forces that do not satisfy these criteria are termed "irregular forces".
Types: The term "irregular military" describes the "how" and "what", but it is more common to focus on the "why" as just about all irregular units were created to provide a tactical advantage to an existing military, whether it was privateer forces harassing shipping lanes against assorted New World colonies on behalf of their European contractors, or Auxiliaries, levies, civilian and other standing irregular troops that are used as more expendable supplements to assist costly trained soldiers. Bypassing the legitimate military and taking up arms is an extreme measure. The motivation for doing so is often used as the basis of the primary label for any irregular military. Different terms come into and out of fashion, based on political and emotional associations that develop. Here is a list of such terms, which is organized more or less from oldest to latest:
Auxiliaries – foreign or allied troops supplementing the regular army, organized from provincial or tribal regions. In the Imperial Roman army, it became common to maintain a number of auxiliaries about equal to the legionaries.
Levies – feudal peasants and freemen liable to be called up for short-term military duty.
Privateer – a "for-profit" private person or ship authorized and sponsored by a government by letters of marque to attack foreign vessels during wartime and to destroy or disrupt logistics of the enemy during "peacetime", often on the open sea by attacking its merchant shipping, rather than engaging its combatants or enforcing a blockade against them.
Revolutionary – someone part of a revolution, whether military or not.
Guerrilla – someone who uses unconventional military tactics. The term tends to refer to groups engaged in open conflict, rather than underground resistance. It was coined during the Peninsular War in Spain against France.
Montoneras – a type of irregular force formed in Latin America in the 19th century.
Franc-tireur – French irregular forces during the Franco-Prussian War. The term is also used in international legal cases as a synonym for unprivileged combatant (for example the Hostages Trial [1947–1948]).
Militia – military force composed of ordinary citizens.
Ordenanças – The Portuguese territorial militia system from the 16th century to the 19th century. From the 17th century, it became the third line of the Army, serving both as local defense force and as the mobilization system that provided conscripts for the first (Regular) and second (Militia) lines of the Army.
Partisan – In the 20th century, someone part of a resistance movement. In the 18th and 19th century, a local conventional military force using irregular tactics. Often used to refer to resistance movements against the Axis Powers during the Second World War.
Freedom fighter – A type of irregular military in which the main cause, in their or their supporters' view, is freedom for themselves or others.
Paramilitary – An organization whose structure, tactics, training, subculture, and (often) function are similar to those of a professional military, but which is not part of a country's official or legitimate armed forces.
Terrorist – An irregular military that targets civilians and other non-combatants to gain political leverage. The term is almost always used pejoratively. Although reasonably well defined, its application is frequently controversial.
False flag or pseudo-operations – Troops of one side dressing like troops of another side to eliminate or discredit the latter and its support, such as members of the Panzer Brigade 150, commanded by Waffen-SS commando Otto Skorzeny in Operation Greif during the Battle of the Bulge in World War II and Selous Scouts of the Rhodesian Bush War.
Insurgent – An alternate term for a member of an irregular military that tends to refer to members of underground groups such as the Iraqi Insurgency, rather than larger rebel organizations like the Revolutionary Armed Forces of Colombia.
Fifth column – A group that carries out sabotage, disinformation, espionage, or terrorism from within a larger group, in sympathy or collusion with that group's external enemies.
Bandit – Banditry is generally treated as organized crime, but depending on the political and social situation it can take on the character of a resistance movement.
Private army – Combatants who owe their allegiance to a private person, group, or organization.
Mercenary or "soldier of fortune" – Someone who is generally not a national in a standing army or not otherwise an inherently-invested party to an armed conflict who becomes involved in an armed conflict for monetary motives or for private gain. Mercenaries are often explicitly hired to fight or provide manpower or expertise in exchange for money; material wealth or, less commonly, political power. Mercenaries are often experienced combatants or former regular soldiers who decided to sell their combat experience, skill or manpower to interested parties or to the highest bidder in an armed conflict. Famous historic examples of "professional" or organized (often "career") mercenaries include the Italian condottieri, or "contractors", leaders of "free agent" mercenary armies that provided their armies to the various Italian city-states and the Papal states during the Late Middle Ages and Renaissance Italy in exchange for profit, land or power. However, not all soldiers deemed to be "mercenaries" are "professional" or "career" mercenaries, and many mercenaries may be simply opportunists or persons with no prior combat experience. Whether a combatant is truly a "mercenary" may be a matter of controversy or degree, as financial and national interests often overlap, and most standing regular armies also provide their soldiers with some form of payment. Furthermore, as reflected in the Geneva Convention, mercenaries are generally provided less protection under the rules of war than non-mercenaries, and many countries have criminalized "mercenary activity".
Intense debates can build up over which term should be used to refer to a specific group, since using one term over another can imply strong support for, or opposition to, its cause.
It is possible for a military to cross the line between regular and irregular. Isolated regular army units that are forced to operate without regular support for long periods of time can degrade into irregulars. As an irregular military becomes more successful, it may transition away from irregular, even to the point of becoming the new regular army if it wins.
Regular military units that use irregular military tactics: Most conventional military officers and militaries are wary of using irregular military forces and see them as unreliable, of doubtful military usefulness, and prone to committing atrocities leading to retaliation in kind. Usually, such forces are raised outside the regular military, like the British SOE during World War II and, more recently, the CIA's Special Activities Center. At times, however, such as out of desperation, conventional militaries resort to guerrilla tactics, usually to buy breathing space and time by tying up enemy forces and threatening their lines of communication and rear areas; examples include the 43rd Battalion Virginia Cavalry and the Chindits.
Although they are part of a regular army, United States Special Forces are trained in missions that involve the use of irregular military tactics. However, outside the United States, the term special forces does not generally imply a force that is trained to fight as guerrillas and insurgents. Originally, the United States Special Forces were created to serve as a cadre around which stay-behind resistance forces could be built in the event of a communist victory in Europe or elsewhere. The United States Special Forces and the CIA's Special Activities Center can trace their lineage to the OSS operators of World War II, who were tasked with inspiring, training, arming and leading resistance movements in German-occupied Europe and Japanese-occupied Asia.
In Finland, well-trained light infantry Sissi troops use irregular tactics such as reconnaissance, sabotage and guerrilla warfare behind enemy lines.
The founder of the People's Republic of China, Mao Zedong, actively advocated for the use of irregular military tactics by regular military units. In his book On Guerrilla Warfare, Mao describes seven types of guerrilla units and argues that "regular army units temporarily detailed for the purpose (of guerilla warfare)," "regular army units permanently detailed (for the purpose of guerilla warfare)," and bands of guerrillas created "through a combination of a regular army unit and a unit recruited from the people" were all examples of ways in which regular military units could be involved in irregular warfare. Mao argues that regular army units temporarily detailed for irregular warfare are essential because "First, in mobile-warfare situations, the coordination of guerilla activities with regular operations is necessary. Second, until guerilla hostilities can be developed on a grand scale, there is no one to carry out guerilla missions but regulars." He also emphasizes the importance of using regular units permanently attached to guerrilla warfare activities, stating that they can play key roles in severing enemy supply routes.
Effectiveness: While the morale, training and equipment of the individual irregular soldier can vary from very poor to excellent, irregulars usually lack the higher-level organizational training and equipment that are part of a regular army. This usually makes irregulars ineffective in direct, main-line combat, the typical focus of more standard armed forces. Other things being equal, major battles between regulars and irregulars heavily favor the regulars.
However, irregulars can excel at many other combat duties besides main-line combat, such as scouting, skirmishing, harassing, pursuing, rear-guard actions, cutting supply, sabotage, raids, ambushes and underground resistance. Experienced irregulars often surpass the regular army in these functions. By avoiding formal battles, irregulars have sometimes harassed high quality armies to destruction.
The total effect of irregulars is often underestimated. Since the military actions of irregulars are often small and unofficial, they are underreported or even overlooked. Even when engaged by regular armies, some military histories exclude all irregulars when counting friendly troops, but include irregulars in the count of enemy troops, making the odds seem much worse than they were. This may be accidental; counts of friendly troops often came from official regular army rolls that exclude unofficial forces, while enemy strength often came from visual estimates, where the distinction between regular and irregular was lost. If irregular forces overwhelm regulars, records of the defeat are often lost in the resulting chaos.
History: By definition, "irregular" is understood in contrast to "regular armies", which grew slowly from personal bodyguards or elite militia. In Ancient warfare, most civilized nations relied heavily on irregulars to augment their small regular army. Even in advanced civilizations, the irregulars commonly outnumbered the regular army.
Sometimes entire tribal armies of irregulars were brought in from internal native or neighboring cultures, especially ones that still had an active hunting tradition to provide the basic training of irregulars. The regulars would only provide the core military in the major battles; irregulars would provide all other combat duties.
Notable examples of regulars relying on irregulars include Bashi-bazouk units in the Ottoman Empire, auxiliary cohorts of Germanic peoples in the Roman Empire, Cossacks in the Russian Empire, and Native American forces serving the Confederate States of America on the American frontier.
One could attribute the disastrous defeat of the Romans at the Battle of the Teutoburg Forest to the lack of supporting irregular forces; only a few squadrons of irregular light cavalry accompanied the invasion of Germany when normally the number of foederati and auxiliaries would equal the regular legions. During this campaign the majority of locally recruited irregulars defected to the Germanic tribesmen led by the former auxiliary officer Arminius.
During the decline of the Roman Empire, irregulars made up an ever-increasing proportion of the Roman military. At the end of the Western Empire, there was little difference between the Roman military and the barbarians across the borders.
Following the modernisation of warfare through mass conscription in the Napoleonic era, the guerrilla campaign waged by Spaniards against the French invaders from 1808 during the Peninsular War provided the first modern example of guerrilla warfare. Indeed, the term guerrilla itself was coined during this conflict.
As the Industrial Revolution dried up the traditional sources of irregulars, nations were forced to take over the duties of irregulars using specially trained regular army units. An example is the light infantry of the British Army.
Irregular regiments in British India: Prior to 1857, Britain's East India Company (EIC) maintained large numbers of cavalry and infantry regiments officially designated as "irregulars", although they were permanently established units. The end of Muslim rule had left a large number of Indian Muslim horsemen unemployed, and many were taken into the army of the EIC. British officers such as Skinner, Gardner and Hearsay became leaders of irregular cavalry that preserved the traditions of Mughal cavalry, which served a political purpose because it absorbed pockets of cavalrymen who might otherwise have become disaffected plunderers. These regiments were less formally drilled and had fewer British officers (sometimes only three or four per regiment) than the "regular" sepoys in British service. This system enabled the Indian officers to achieve greater responsibility than their counterparts in regular regiments. Promotion for both Indian and British officers was based on efficiency and energy, rather than on seniority as elsewhere in the EIC's armies. In the irregular cavalry the Indian troopers provided their own horses under the silladar system. The result was a loose collection of regiments which in general were more effective in the field than their regular counterparts. These irregular units were also cheaper to raise and maintain, and as a result many survived into the new Indian Army that was organized following the great Indian Rebellion of 1857.
Irregular military in Canada before 1867: Before 1867, military forces in Canada consisted of British regular units and locally raised volunteer units.
During French rule, small local volunteer militia units, or colonial militias, were used to meet defence needs. Under British rule, various local militias and the Provincial Marine were used to support British regular forces in Canada.
Other instances of irregulars: The use of large irregular forces featured heavily in conflicts such as the Three Kingdoms period, the American Revolution, the Irish War of Independence and Irish Civil War, the Franco-Prussian War, the Russian Civil War, the Second Boer War, the Bangladesh Liberation War, the Vietnam War, the Syrian Civil War and especially the Eastern Front of World War II, where hundreds of thousands of partisans fought on both sides.
The Chinese People's Liberation Army began as a peasant guerilla force which in time transformed itself into a large regular force. This transformation was foreseen in the doctrine of "people's war", in which irregular forces were seen as being able to engage the enemy and to win the support of the populace but as being incapable of taking and holding ground against regular military forces.
Examples: Arbegnoch - Guerrilla force in occupied Ethiopia 1936-44.
Armatoloi - Ottoman Greek irregulars
Armenian fedayi – Armenian irregular units of the 1880s–1920s
Atholl Highlanders – The only legal and still existing private army in Europe under the command of the Duke of Atholl in Scotland, United Kingdom, (1777–1783 and since 1839)
Bands - (Italian Army colonial and foreign irregulars)
Bargi - Maratha horsemen 1741-51.
Bashi-bazouk – Irregular mounted mercenary in the Ottoman Empire
Border ruffian / Jayhawker
Bushwhackers – Irregular partisans who fought for the South during the American Civil War.
Cacos - Haitian insurgent groups 19th and 20th centuries.
Camisards – Huguenot insurgency in the beginning of the 18th century in the Cévennes
Cateran - Scottish clan warriors and marauders pre-18th century.
Çetes - Muslim irregulars Asia Minor 1910s-1920s
Cheta - armed bands resisting Ottoman rule in Macedonia, early 20th century.
Chetniks - nationalist movement and guerrilla force in occupied Yugoslavia 1941-44.
Croats (military unit) - 17th-century frontier light cavalry in Habsburg service.
Dubat - indigenous auxiliaries in Italian Somaliland.
Fano - Ethiopian guerrilla force
Fedayeen - Arabic term for fighters willing to sacrifice themselves
Fellagha - nationalist militants in Algeria and Tunisia opposing French colonial rule 1950s.
Filibuster (military) - participants in foreign military interventions without official backing.
Free Corps (Freikorps) – volunteer units in German-speaking countries, that existed from the 18th to the early 20th centuries as private armies
Free Swarm (Freischar) – volunteers, that participated in a conflict without the formal authorisation of one of the belligerents, but on the instigation of a political party or an individual
Goumiers – originally tribal allies supporting France in Algeria during the 19th century. From 1912 to 1956 Moroccan auxiliaries serving with the French Army.
Hajduks – bandits and irregulars in and against the Ottoman Empire, but found amongst military ranks in Hungary and the Polish–Lithuanian Commonwealth
Harkis – Algerian Muslim irregulars who served with the French Army during the Algerian War of 1954–62.
Haydamak - pro-Cossack paramilitary (18th century)
Honghuzi – Manchurian bandits who served as irregulars during the Russo-Japanese War of 1904–1905.
Jagunço – armed band in Northern Brazil.
Kachaks - Albanian bandits and rebels (1880s–1930)
Klephts – Greek guerrilla fighters in Ottoman Greece
Komitadji – rebel bands operating in the Balkans during the final period of the Ottoman Empire.
Kuruc - Hungarian insurgent groups 17th-18th centuries.
Kuva-yi Milliye - Ottoman/Turkish militia 1918-1921
Land Storm (troops) (Landsturm) – created by a 21 April 1813 edict of Frederick William III of Prussia, lowest level of reserve troops in Prussia, Germany, Austria-Hungary, Sweden, Switzerland and the Netherlands
Legion of Frontiersmen – An irregular quasi-military organization that proliferated throughout the British Empire prior to World War I
Macheteros de Jara - Paraguayan cavalry regiment of the Chaco War
Republiquetas
Requeté
Makhnovshchina – Ukrainian anarchist army that fought both the White Armies and the Bolsheviks during the Russian Civil War.
Minutemen – American irregular troops during the American Revolution
Morlachs - Dalmatian auxiliaries in Venetian service during the 17th century.
People's Liberation Armed Forces of South Vietnam – the Viet Cong's army
Pindari – 18th century irregular horsemen in India
Rapparee - Irish guerillas (1690s)
Righteous Army – militias organised at several dates in Korean history
Rough Riders – in the Spanish–American War
Ruga-Ruga - East African auxiliaries to German and British colonial armies.
Selbstschutz
Shifta – local militia in the Horn of Africa.
Trenck's Pandurs – Habsburg monarchy 17th and 18th century skirmishers, later evolving into the regular Grenz infantry.
Zapatistas - militant political movement active in southern Mexico from 1994.
Zeybeks - Ottoman irregulars (17th to 20th centuries)
Irregulars in today's warfare: Modern conflicts such as those in post-invasion Iraq, the renewed Taliban insurgency following the 2001 war in Afghanistan, the Darfur conflict, the rebellion in the north of Uganda by the Lord's Resistance Army, and the Second Chechen War have been fought almost entirely by irregular forces on one or both sides.
The CIA's Special Activities Center (SAC) is the premier American paramilitary clandestine unit for creating or combating irregular military forces. SAC paramilitary officers created and led successful units from the Hmong tribe during the Laotian Civil War in the 1960s and 1970s. They also organized and led the Mujaheddin as an irregular force against the Soviet Union in Afghanistan in the 1980s, and the Northern Alliance, alongside US Army Special Forces, as an irregular insurgency force against the Taliban during the war in Afghanistan in 2001. They likewise organized and led the Kurdish Peshmerga with US Army Special Forces as an irregular counter-insurgency force against the Kurdish Sunni Islamist group Ansar al-Islam at the Iraq-Iran border, and as an irregular force against Saddam Hussein during the war in Iraq in 2003.
Irregular civilian volunteers also played a large role in the Battle of Kyiv during the 2022 Russian Invasion of Ukraine.
See also: Asymmetric warfare – Military theory that also includes regulars vs. irregulars
Fourth generation warfare
"Yank" Levy, teacher of the Home Guard and coauthor of the first practical book on Guerrilla Warfare
Low intensity conflict
Military volunteer
Unconventional warfare
Violent non-state actors
Sissi (Finnish light infantry)
Legal aspects, categories: Definition of terrorism
Enemy combatant, US term used during the "War on Terror"
Law of war
Martens Clause, stating that customary law applies where specific law is lacking in detail
Unlawful combatant
Further reading: Beckett, I. F. W. (15 September 2009). Encyclopedia of Guerrilla Warfare (Hardcover). Santa Barbara, California: Abc-Clio Inc. ISBN 978-0874369298. |
mil_tactics_continued_pretraining.csv | Irregular warfare | Terminology:
Early usage: One of the earliest known uses of the term irregular warfare is in the 1986 English edition of "Modern Irregular Warfare in Defense Policy and as a Military Phenomenon" by former Nazi officer Friedrich August Freiherr von der Heydte. The original 1972 German edition of the book is titled "Der moderne Kleinkrieg als wehrpolitisches und militärisches Phänomen". The German word "Kleinkrieg" is literally translated as "small war". The word "irregular", used in the title of the English translation of the book, seems to be a reference to forces that are not "regular armed forces" as defined in the Third Geneva Convention.
Another early use of the term is in a 1996 Central Intelligence Agency (CIA) document by Jeffrey B. White. Major military doctrine developments related to IW were made between 2004 and 2007 as a result of the September 11 attacks on the United States. A key proponent of IW within the US Department of Defense (DoD) is Michael G. Vickers, a former paramilitary officer in the CIA. The CIA's Special Activities Center (SAC) is the premier American paramilitary clandestine unit for creating and for combating irregular warfare units. For example, SAC paramilitary officers created and led successful irregular units from the Hmong tribe during the war in Laos in the 1960s, from the Northern Alliance against the Taliban during the war in Afghanistan in 2001, and from the Kurdish Peshmerga against Ansar al-Islam and the forces of Saddam Hussein during the war in Iraq in 2003.
Other definitions: IW is a form of warfare that has as its objective the credibility and/or legitimacy of the relevant political authority with the goal of undermining or supporting that authority. IW favors indirect approaches, though it may employ the full range of military and other capabilities to seek asymmetric approaches in order to erode an adversary's power, influence, and will.
IW is defined as a violent struggle among state and non-state actors for legitimacy and influence over the relevant population(s).
IW involves conflicts in which enemy combatants are not regular military forces of nation-states.
IW is "war among the people" as opposed to "industrial war" (i.e., regular war).
Examples: Nearly all modern wars include at least some element of irregular warfare. Since the time of Napoleon, approximately 80% of conflict has been irregular in nature.
However, the following conflicts may be considered to exemplify irregular warfare:
Afghan Civil War
Algerian War
American Indian Wars
American Revolutionary War
Arab Revolt
Chinese Civil War
Cuban Revolution
First Chechen War
First Sudanese Civil War
Iraq War
Kosovo War
Lebanese Civil War
Portuguese Colonial War
Rwandan Civil War
Second Boer War
Second Chechen War
Second Sudanese Civil War
Somali Civil War
Philippine-American War
The Troubles
Vietnam War
Libyan Civil War (2011)
Syrian Civil War
Iraqi Civil War (2014–2017)
Second Libyan Civil War
Yemeni Civil War (2015–present)
Activities: Activities and types of conflict included in IW are:
Asymmetric warfare
Civil-military operations (CMO)
Colonial war
Foreign internal defense (FID)
Guerrilla warfare (GW)
Insurgency/Counter-insurgency (COIN)
Law enforcement activities focused on countering irregular adversaries
Military Intelligence and counter-intelligence activities
Stabilization, Security, Transition, and Reconstruction Operations (SSTRO)
Terrorism/Counter-terrorism
Transnational criminal activities that support or sustain IW:
Narco-trafficking
Illicit arms trafficking
Illegal financial transactions
Unconventional warfare (UW)
According to the DoD, there are five core activities of IW:
Counter-insurgency (COIN)
Counter-terrorism (CT)
Unconventional warfare (UW)
Foreign internal defense (FID)
Stabilization Operations (SO)
Modeling and simulation: As a result of DoD Directive 3000.07, United States armed forces are studying irregular warfare concepts using modeling and simulation.
Wargames and exercises: There have been several military wargames and military exercises associated with IW, including:
Unified Action,
Unified Quest,
January 2010 Tri-Service Maritime Workshop,
Joint Irregular Warrior Series war games,
Expeditionary Warrior war game series, and
a December 2011 Naval War College Maritime Stability Operations Game focused specifically on stability operations in the maritime domain conducted by the Naval Service.
See also: Individuals:
Che Guevara
François Géré
John R. M. Taylor
T. E. Lawrence
Robert Rogers' 28 "Rules of Ranging"
External links:
Military Art and Science Major - Irregular Warfare Specialty Track
Pincus, Walter, "Irregular Warfare, Both Future and Present," The Washington Post, 7 April 2008
Phillips, Joan T., Fairchild, Muir S., "Irregular Warfare", Maxwell Air Force Base, March 2007
Gustafson, Michael, "Modern Irregular Warfare & Counterinsurgency", Swedish National Defence College, 2009
Coons, Kenneth C. Jr., Harned, Glenn M., "Irregular Warfare is Warfare", Joint Force Quarterly, National Defense University, 2009
Naval Postgraduate School (NPS) Center on Terrorism and Irregular Warfare (CTIW)
United States Joint Forces Command (USJFCOM) Joint Irregular Warfare Center (JIWC)
Armed Groups and Irregular Warfare; Adapting Professional Military Education, Richard H. Shultz, Jr., Roy Godson, and Querine Hanlon (Washington, DC: National Strategy Information Center, 2009).
Tomkins, Paul, Irregular Warfare: Annotated Bibliography. Fort Bragg, NC: United States Army Special Operations Command, 2011. |
mil_tactics_continued_pretraining.csv | Islamic military jurisprudence | Development of rulings: The first military rulings were formulated during the first century after Muhammad established an Islamic state in Medina. These rulings evolved in accordance with the interpretations of the Qur'an (the Islamic Holy scriptures) and Hadith (the recorded traditions, actions (behaviors), sayings and consents of Muhammad). The key themes in these rulings were the justness of war (Harb), and the injunction to jihad. The rulings do not cover feuds and armed conflicts in general.
Jihad (Arabic for "struggle") was given a military dimension after the oppressive practices of the Meccan Quraish against Muslims. It was interpreted as the struggle in God's cause to be conducted by the Muslim community. Injunctions relating to jihad have been characterized as individual as well as collective duties of the Muslim community. Hence, the nature of attack is important in the interpretation—if the Muslim community as a whole is attacked jihad becomes incumbent on all Muslims. Jihad is differentiated further in respect to the requirements within Muslim-governed lands (Dar al-Islam) and non-Muslim lands, both friendly and hostile.
According to Shaheen Sardar Ali and Javaid Rehman, both professors of law, Islamic military jurisprudence is in line with the rules of modern international law. They point to the dual commitment of Organisation of Islamic Cooperation (OIC) member states (representing most of the Muslim world) to Islamic law and the United Nations Charter as evidence of the compatibility of both legal systems.
Ethics of warfare: Fighting is justified for legitimate self-defense, to aid other Muslims and after a violation in the terms of a treaty, but should be stopped if these circumstances cease to exist. War should be conducted in a disciplined way, to avoid injuring non-combatants, with the minimum necessary force, without anger and with humane treatment towards prisoners of war.
During his life, Muhammad gave various injunctions to his forces and adopted practices toward the conduct of war. The most important of these were summarized by Muhammad's companion and first Caliph, Abu Bakr, in the form of ten rules for the Muslim army:
O people! I charge you with ten rules; learn them well! Stop, O people, that I may give you ten rules for your guidance in the battlefield. Do not commit treachery or deviate from the right path. You must not mutilate dead bodies. Neither kill a child, nor a woman, nor an aged man. Bring no harm to the trees, nor burn them with fire, especially those which are fruitful. Slay not any of the enemy's flock, save for your food. You are likely to pass by people who have devoted their lives to monastic services; leave them alone.
According to Tabari, Abu Bakr gave these ten pieces of "advice" during the Expedition of Usama bin Zayd. During the Battle of Siffin, the Caliph Ali stated that Islam does not permit Muslims to stop the supply of water to their enemy. In addition to the Rashidun Caliphs, hadiths attributed to Muhammad himself suggest that he stated the following regarding the Muslim conquest of Egypt that eventually took place after his death:
You are going to enter Egypt a land where qirat (money unit) is used. Be extremely good to them as they have with us close ties and marriage relationships. When you enter Egypt after my death, recruit many soldiers from among the Egyptians because they are the best soldiers on earth, as they and their wives are permanently on duty until the Day of Resurrection. Be good to the Copts of Egypt; you shall take them over, but they shall be your instrument and help. Be Righteous to God about the Copts.
These principles were upheld by 'Amr ibn al-'As during his conquest of Egypt. A Christian contemporary in the 7th century, John of Nikiû, stated the following regarding the conquest of Alexandria by 'Amr:
On the twentieth of Maskaram, Theodore and all his troops and officers set out and proceeded to the island of Cyprus, and abandoned the city of Alexandria. And thereupon 'Amr the chief of the Moslem made his entry without effort into the city of Alexandria. And the inhabitants received him with respect; for they were in great tribulation and affliction. And Abba Benjamin, the patriarch of the Egyptians, returned to the city of Alexandria in the thirteenth year after his flight from the Romans, and he went to the Churches, and inspected all of them. And every one said: 'This expulsion (of the Romans) and victory of the Moslem is due to the wickedness of the emperor Heraclius and his persecution of the Orthodox through the patriarch Cyrus. This was the cause of the ruin of the Romans and the subjugation of Egypt by the Moslem. And 'Amr became stronger every day in every field of his activity. And he exacted the taxes which had been determined upon, but he took none of the property of the Churches, and he committed no act of spoliation or plunder, and he preserved them throughout all his days.
The principles established by the early Caliphs were also honoured during the Crusades, as exemplified by Sultans such as Saladin and Al-Kamil. For example, after Al-Kamil defeated the Franks during the Crusades, Oliverus Scholasticus praised the Islamic laws of war, commenting on how Al-Kamil supplied the defeated Frankish army with food:
Who could doubt that such goodness, friendship and charity come from God? Men whose parents, sons and daughters, brothers and sisters, had died in agony at our hands, whose lands we took, whom we drove naked from their homes, revived us with their own food when we were dying of hunger and showered us with kindness even when we were in their power.
The early Islamic treatises on international law from the 9th century onwards covered the application of Islamic ethics, Islamic economic jurisprudence and Islamic military jurisprudence to international law, and were concerned with a number of modern international law topics, including the law of treaties; the treatment of diplomats, hostages, refugees and prisoners of war; the right of asylum; conduct on the battlefield; protection of women, children and non-combatant civilians; contracts across the lines of battle; the use of poisonous weapons; and devastation of enemy territory.
Criteria for soldiering: Muslim jurists agree that Muslim armed forces must consist of debt-free adults who possess a sound mind and body. In addition, the combatants must not be conscripted, but rather enlist of their free will, and with the permission of their family.
Legitimacy of war: Muslims have struggled to differentiate between legitimate and illegitimate wars. Fighting in self-defense is not only legitimate but considered obligatory upon Muslims, according to the Qur'an. The Qur'an, however, says that should the enemy's hostile behavior cease, then the reason for engaging the enemy also lapses.
Defensive conflict: According to the majority of jurists, the Qur'anic casus belli (justification of war) are restricted to aggression against Muslims and fitna—persecution of Muslims because of their religious belief. They hold that unbelief in itself is not the justification for war. These jurists therefore maintain that only combatants are to be fought; noncombatants such as women, children, clergy, the aged, the insane, farmers, serfs, the blind, and so on are not to be killed in war. Thus, the Hanafī Ibn Najīm states: "the reason for jihād in our [the Hanafīs] view is kawnuhum harbā ‛alaynā [literally, their being at war against us]." The Hanafī jurists al-Shaybānī and al-Sarakhsī state that "although kufr [unbelief in God] is one of the greatest sins, it is between the individual and his God the Almighty and the punishment for this sin is to be postponed to the dār al-jazā’, (the abode of reckoning, the Hereafter)." War, according to the Hanafis, can't simply be made on the account of a nation's religion. Abdulaziz Sachedina argues that the original jihad according to his version of Shi'ism was permission to fight back against those who broke their pledges. Thus the Qur'an justified defensive jihad by allowing Muslims to fight back against hostile and dangerous forces.
Offensive conflict: Muhammad ibn Idris ash-Shafi`i (d. 820), founder of the Shafi'i school of thought, was the first to permit offensive jihad, limiting such warfare to the pagan Arabs only and not permitting it against non-Arab non-Muslims. This view of al-Shafi'i is mitigated by the fact that an opposite view, in agreement with the majority, is also attributed to al-Shafi'i.
According to Abdulaziz Sachedina, offensive jihad raises questions about whether jihad is justifiable on moral grounds. He states that the Qur'an requires Muslims to establish a just public order, increasing the influence of Islam and allowing public Islamic worship, through offensive measures. To this end, the Qur'anic verses that were revealed required Muslims to wage jihad against unbelievers who persecuted them. This has been complicated by the early Muslim conquests, which, he argues, although considered jihad by Sunni scholars, can under close scrutiny be determined to be political. Moreover, offensive jihad points more to the complex relationship with the "People of the book".
Some major modern scholars who have rejected the idea of "offensive jihad" include the founder of the Muslim Brotherhood, Hasan al-Banna (1906–1949), the Al-Azhar scholar Muhammad Abu Zahra (1898–1974) who thought that "military jihad is permitted only to remove aggression ('udwân) and religious persecution (fitnah) against Muslims", as well as Syrian scholars Mohamed Said Ramadan Al-Bouti (1929–2013) and Wahbah al-Zuhayli (1932-2015), the latter saying that "peace is the underlying principle of relations between Muslims and non-Muslims. al-Zuhayli maintains that this view is supported by 8:61, as well as 2:208 and 4:94 that establish the principle of international peace. For him, Muslims should be committed to peace and security (on the basis of 4:90 and 60:8)."
International conflict: International conflicts are armed strifes conducted by one state against another, and are distinguished from civil wars or armed strife within a state. Some classical Islamic scholars, like the Shafi'i, classified territories into broad categories: dar al-islam ("abode of Islam"), dar al-harb ("abode of war), dar al-ahd ("abode of treaty"), and dar al-sulh ("abode of reconciliation"). Such categorizations of states, according to Asma Afsaruddin, are not mentioned in the Qur'an and Islamic tradition.
Declaration of war: The Qur'an commands Muslims to make a proper declaration of war prior to the commencement of military operations. Thus, surprise attacks prior to such a declaration are illegal under the Islamic jurisprudence. The Qur'an had similarly commanded Muhammad to give his enemies, who had violated the Treaty of Hudaybiyyah, a time period of four months to reconsider their position and negotiate. This rule, however, is not binding if the adversary has already started the war. Forcible prevention of religious practice is considered an act of war.
Conduct of armed forces: During battle the Qur'an commands Muslims to fight against the enemy. However, there are restrictions to such combat. Burning or drowning the enemy is allowed only if it is impossible to achieve victory by other means. The mutilation of dead bodies is prohibited. The Qur'an also discourages Muslim combatants from displaying pomp and unnecessary boasting when setting out for battle.
According to professor Sayyid Dāmād, no explicit injunctions against use of chemical or biological warfare were developed by medieval Islamic jurists as these threats were not existent then. However, Khalil al-Maliki's Book on jihad states that combatants are forbidden to employ weapons that cause unnecessary injury to the enemy, except under dire circumstances. The book, as an example, forbids the use of poisonous spears, since it inflicts unnecessary pain.
Civilian areas: According to all madhhabs, it is not permissible to kill women or children unless they are fighting against the Muslims. The Hanafi, Hanbali and Maliki schools forbid killing of those who are not able to fight, including monks, farmers, and serfs, as well as mentally and physically disabled.
Harming civilian areas and pillaging residential areas is also forbidden, as is the destruction of trees, crops, livestock and farmlands. The Muslim forces may not loot travelers, as doing so is contrary to the spirit of jihad. Nor do they have the right to use the local facilities of the native people without their consent. If such a consent is obtained, the Muslim army is still under the obligation to compensate the people financially for the use of such facilities. However, Islamic law allows the confiscation of military equipment and supplies captured from the camps and military headquarters of the combatant armies.
However, the 14th-century faqih (jurist) Ibn Hudayl of Granada says:
It is permissible to set fire to the lands of the enemy, his stores of grain, his beasts of burden—if it is not possible for the Muslims to take possession of them—as well as to cut down his trees, to raze his cities, in a word, to do everything that might ruin and discourage him, provided that the imam deems these measures appropriate, suited to hastening the Islamization of that enemy or to weakening him. Indeed, all this contributes to a military triumph over him or to forcing him to capitulate.
Negotiations: Commentators of the Qur'an agree that Muslims should always be willing and ready to negotiate peace with the other party without any hesitation. According to Maududi, Islam does not permit Muslims to reject peace and continue bloodshed.
Islamic jurisprudence calls for third party interventions as another means of ending conflicts. Such interventions are to establish mediation between the two parties to achieve a just resolution of the dispute.
Ceasefire: In the context of seventh century Arabia, the Qur'an ordained Muslims must restrain themselves from fighting in the months when fighting was prohibited by Arab pagans. The Qur'an also required the respect of this cease-fire, prohibiting its violation.
If, however, non-Muslims commit acts of aggression, Muslims are free to retaliate, though in a manner that is equal to the original transgression. The "sword verse", which has attracted attention, is directed against a particular group who violate the terms of peace and commit aggression (but excepts those who observe the treaty). Patricia Crone states that this verse seems to be based on the same above-mentioned rules. Here also it is stressed that one must stop when they do. Ibn Kathir states that the verse implies a hasty mission of besieging and gathering intelligence about the enemy, resulting in either death or repentance by the enemy. It is read as a continuation of previous verses, it would be concerned with the same oath-breaking of "polytheists".
Prisoners of war: Men, women, and children may all be taken as prisoners of war under traditional interpretations of Islamic law. Generally, a prisoner of war could be, at the discretion of the military leader, executed, freed, ransomed, exchanged for Muslim prisoners, or kept as slaves. In earlier times, the ransom sometimes took an educational dimension, where a literate prisoner of war could secure his or her freedom by teaching ten Muslims to read and write. Some Muslim scholars hold that a prisoner may not be ransomed for gold or silver, but may be exchanged for Muslim prisoners. Women and children prisoners of war cannot be killed under any circumstances, regardless of their religious convictions, but they may be freed or ransomed. Women who are neither freed nor ransomed by their people were to be kept in bondage - also referred to as malakah.
Kitab al-Umm of Al-Shafi'i also recorded how Zubayr ibn al-Awwam and Anas ibn Malik convinced Umar to pardon Hormuzan, despite Umar's earlier intent to execute the Persian general for the death of his two precious soldiers, Mujaz'ah ibn Thawr as-Sadusi and al-Bara' ibn Malik. Umar in the end agreed with Zubayr and Anas to spare Hormuzan as a prisoner of war, and this historical ruling of Zubayr, Anas, and caliph Umar became the guideline for Shafiite scholars that prisoners of war may not, under normal conditions, be harmed.
Permission to interrogate & torture: However, there are special condition regarding the allowance the conduct of using torture as method of interrogation,
Ibn Taymiyyah, Hanbalite scholar who has been praised as Mujaddid, has issued Fatwa that using torture on certain case for exceptionally dangerous criminal or enemy of the state were allowed, which based on the conduct of Zubayr ibn al-Awwam, when he tortured the Jewish chieftain Kenana ibn al-Rabi in the aftermath of the conquest of Khaybar fortresses, as Kenana was hiding the war spoils in Khaibar and refused to tell it. Abd al-Aziz Bin Baz, late 19th AD Grand Mufti of Saudi also supported Ibn Taymiyyah fatwa and issued his own fatwa with similar ruling on the basis Zubayr conduct of interrogating Kenana. Ibn Baz highlighted Zubayr conduct were acknowledged and permitted by Muhammad, as Kenana was one of Jewish conspirator in Khaybar. |
This interrogation procedure exacted by Zubayr upon Kenana was also highlighted by other prominent scholars, such as Ahmad ibn Muhammad al-Tha'labi in his work Tafsir al-Tha'labi.
The Shafi'i madhhab highlighted another case that has been used in ijma (consensus among scholars) to permit the interrogation of enemies of the state: Ali ibn Abi Talib and Zubayr once threatened a polytheist informant spy whom the two Sahabah had caught while he was travelling to inform Mecca of a secret Muslim military operation.
Islamic researchers have attested to and accepted this ruling on torture as an affirmative proposition for particular cases against war criminals; modern theorists of Islamic jurisprudence have agreed with it, viewing the measure as a necessity of upholding the law rather than a degradation of the prisoner's rights as a human being.
Internal conflict: Internal conflicts include "civil wars", launched against rebels, and "wars for welfare" launched against bandits.
During their first civil war, Muslims fought at the Battle of Bassorah. In this engagement, Ali (the caliph) set the precedent for war against other Muslims, which most later Muslims have accepted. According to Ali's rules, wounded or captured enemies should not be killed, those throwing away their arms should not be fought, and those fleeing from the battleground should not be pursued. Only captured weapons and animals (horses and camels which have been used in the war) are to be considered war booty. No war prisoners, women or children are to be enslaved, and the property of the slain enemies is to go to their legal Muslim heirs.
Different views regarding armed rebellion have prevailed in the Muslim world at different times. During the first three centuries of Muslim history, jurists held that a political rebel may not be executed nor his/her property confiscated.
Classical jurists, however, laid down severe penalties for rebels who use "stealth attacks" and "spread terror". In this category, Muslim jurists included abductions, poisoning of water wells, arson, attacks against wayfarers and travellers, assaults under the cover of night and rape. The punishment for such crimes were severe, including death, regardless of the political convictions and religion of the perpetrator.
Some modern commentators have argued that the classical precedent of harsh punishments for rebels engaging in attacks that harmed civilian populations can be taken as evidence that the religious justifications used by Islamist groups such as al Qaeda and ISIL are in fact, not grounded in the Islamic tradition.
See also: Islam and war
Geneva Conventions
Hague conventions
Rule of Law in Armed Conflicts Project (RULAC)
Itmaam-i-hujjat
Laws of war
Opinion of Islamic scholars on Jihad
Islamic Military Counter Terrorism Coalition
References: Aboul-Enein, H. Yousuf; Zuhur, Sherifa, "Islamic Rulings on Warfare", Strategic Studies Institute, US Army War College, Diane Publishing Co., Darby PA, ISBN 1-4289-1039-5
Abu-Nimer, Mohammed (2000–2001). "A Framework for Nonviolence and Peacebuilding in Islam". Journal of Law and Religion 15 (1/2). Retrieved on 2007-08-05.
Ali, Abdullah Yusuf (1991). The Holy Quran. Medina: King Fahd Holy Qur-an Printing Complex.
Charles, Robert H. (2007) [1916]. The Chronicle of John, Bishop of Nikiu: Translated from Zotenberg's Ethiopic Text. Merchantville, NJ: Evolution Publishing. ISBN 9781889758879.
Dāmād, Sayyid Mustafa Muhaqqiq et al. (2003). Islamic views on Human Rights. Tehran: Center for Cultural-International Studies.
Crone, Patricia (2004). God's Rule: Government and Islam. New York: Columbia University Press.
Javed Ahmad Ghamidi, Mizan (2001). The Islamic Law of Jihad, Dar ul-Ishraq. OCLC 52901690
Nicola Melis, Trattato sulla guerra. Il Kitāb al-ğihād di Molla Hüsrev, Aipsa, Cagliari 2002.
Madelung, Wilferd (1997). The Succession to Muhammad: A Study of the Early Caliphate. Cambridge University Press. ISBN 0-521-64696-0.
Maududi, Sayyid Abul Ala (1967). The Meaning of the Quran. Lahore: Islamic publications.
Maududi, Sayyid Abul Ala (1998). Human Rights in Islam. Islamabad: Da'wah Academy.
M. Mukarram Ahmed, Muzaffar Husain Syed, ed. (2005). "Encyclopaedia of Islam: Introduction to Islam". Encyclopaedia of Islam. Anmol Publications PVT. LTD. ISBN 81-261-2339-7.
Further reading: Khadduri, Majid (1955). War and Peace in the Law of Islam. Johns Hopkins Press. ISBN 1-58477-695-1.
Hashmi, Sohail H., ed. (2002). Islamic Political Ethics: Civil Society, Pluralism, and Conflict. Princeton University Press. ISBN 0-691-11310-6.
Malik, S. K. (1986). The Quranic Concept of War (PDF). Himalayan Books. ISBN 81-7002-020-4.
External links:
Jihad and the Islamic Law of War - RISSC |
mil_tactics_continued_pretraining.csv | Italo-Turkish War | Background: Italian claims to Libya date back to the Ottoman defeat by the Russian Empire during the War of 1877–1878 and subsequent disputes thereafter. At the Congress of Berlin in 1878, France and the United Kingdom had agreed to the French occupation of Tunisia and British control over Cyprus respectively, which were both parts of the declining Ottoman state.
When Italian diplomats hinted that their government might oppose the Anglo-French maneuvers, the French replied that Tripoli would be a counterpart for Italy. Italy subsequently made a secret agreement with the British government in February 1887 via a diplomatic exchange of notes. The agreement stipulated that Italy would support British control in Egypt, and that Britain would likewise support Italian influence in Libya. In 1902, Italy and France had signed a secret treaty which accorded freedom of intervention in Tripolitania and Morocco. The agreement, negotiated by Italian Foreign Minister Giulio Prinetti and French Ambassador Camille Barrère, ended the historic rivalry between both nations for control of North Africa. The same year, the British government promised Italy that "any alteration in the status of Libya would be in conformity with Italian interests". Those measures were intended to loosen Italian commitment to the Triple Alliance and thereby weaken Germany, which France and Britain viewed as their main rival in Europe.
Following the Anglo-Russian Convention and the establishment of the Triple Entente, Tsar Nicholas II and King Victor Emmanuel III made the 1909 Racconigi Bargain in which Russia acknowledged Italy's interest in Tripoli and Cyrenaica in return for Italian support for Russian control of the Bosphorus. However, the Italian government did little to realise that opportunity and so knowledge of the Libyan territory and resources remained scarce in the following years.
The removal of diplomatic obstacles coincided with increasing colonial fervor. In 1908, the Italian Colonial Office was upgraded to a Central Directorate of Colonial Affairs. The nationalist Enrico Corradini led the public call for action in Libya and, joined by the nationalist newspaper L'Idea Nazionale in 1911, demanded an invasion. The Italian press began a large-scale lobbying campaign for an invasion of Libya in late March 1911. It was fancifully depicted as rich in minerals and well-watered, defended by only 4,000 Ottoman troops. Also, its population was described as hostile to the Ottomans and friendly to the Italians, and they predicted that the future invasion would be little more than a "military walk".
The Italian government remained committed into 1911 to the maintenance of the Ottoman Empire, which was a close friend of its German ally. Prime Minister Giovanni Giolitti rejected nationalist calls for conflict over Ottoman Albania, which was seen as a possible colonial project, as late as the summer of 1911.
However, the Agadir Crisis, in which French military action in Morocco in April 1911 would lead to the establishment of a French protectorate, changed the political calculations. The Italian leadership then decided that it could safely accede to public demands for a colonial project. The Triple Entente powers were highly supportive. British Foreign Secretary Edward Grey stated to the Italian ambassador on 28 July that he would support Italy, not the Ottomans. On 19 September, Grey instructed Permanent Under-Secretary of State Sir Arthur Nicolson, 1st Baron Carnock, that Britain and France should not interfere with Italy's designs on Libya. Meanwhile, the Russian government urged Italy to act in a "prompt and resolute manner".
In contrast to its engagement with the Entente powers, Italy largely ignored its military allies in the Triple Alliance. Giolitti and Foreign Minister Antonino Paternò Castello agreed on 14 September to launch a military campaign "before the Austrian and German governments [were aware] of it". Germany was then actively attempting to mediate between Rome and Constantinople, and Austro-Hungarian Foreign Minister Alois Lexa von Aehrenthal repeatedly warned Italy that military action in Libya would threaten the integrity of the Ottoman Empire and create a crisis in the Eastern Question, which would destabilise the Balkan Peninsula and the European balance of power. Italy also foresaw that result since Paternò Castello, in a July report to the king and Giolitti, laid out the reasons for and against military action in Libya, and he raised the concern that the Balkan revolt, which would likely follow an Italian attack on Libya, might force Austria-Hungary to take military action in Balkan areas claimed by Italy.
The Italian Socialist Party had a strong influence over public opinion, but it was in opposition and also divided on the issue. It acted ineffectively against military intervention. The future Italian fascist leader Benito Mussolini, who was then still a left-wing Socialist, took a prominent antiwar position. A similar opposition was expressed in Parliament by Gaetano Salvemini and Leone Caetani.
An ultimatum was presented to the Ottoman government, led by the Committee of Union and Progress (CUP), on the night of 26–27 September 1911. Through Austro-Hungarian intermediation, the Ottomans replied with the proposal of transferring control of Libya without war and maintaining a formal Ottoman suzerainty. That suggestion was comparable to the situation in Egypt, which was under formal Ottoman suzerainty but was under de facto control by the British. Giolitti refused.
Italy declared war on 29 September 1911.
Military campaign:
Opening maneuver: The Italian army was ill-prepared for the war and was not informed of the government's plans for Libya until late September. The army had a shortage of soldiers as the class of 1889 was demobilized before the war started. Military operations started with the bombardment of Tripoli on 3 October. The city was conquered by 1,500 sailors, much to the enthusiasm of the interventionist minority in Italy. Another proposal for a diplomatic settlement was rejected by the Italians, and so the Ottomans decided to defend the province.
On 29 September 1911, Italy published a declaration of its direct interest in Libya. Having received no satisfactory response, Italian forces landed on the shores of Libya on 4 October 1911. A considerable number of Italians were living within the Ottoman Empire, mostly inhabiting Istanbul, Izmir, and Thessaloniki, dealing with trade and industry. The sudden declaration of war shocked both the Italian community living in the Empire and the Ottoman government. Relying on the mutually friendly relations, the Ottoman government had sent its Libyan battalions to Yemen in order to suppress local rebellions, leaving only the military police in Libya.
Therefore, the Ottomans did not have a full army in Tripolitania. Many of the Ottoman officers had to travel there by their own means, often secretly, through Egypt since the British government would not allow Ottoman troops to be transported en masse through Egypt. The Ottoman Navy was too weak to transport troops by sea. The Ottomans organised local Libyans for the defence against the Italian invasion.
Between 1911 and 1912, over 1,000 Somalis from Mogadishu, the capital of Italian Somaliland, served as combat units along with Eritrean and Italian soldiers in the Italo-Turkish War. Most of the Somalian troops stationed would return home only in 1935, when they were transferred back to Italian Somaliland in preparation for the invasion of Ethiopia.
Italian troops landing in Libya: The first disembarkation of Italian troops occurred on 10 October. With no prior experience of amphibious operations and no adequate planning for such invasions, the Italian armies poured onto the coasts of Libya and faced numerous problems during their landings and deployments. One of those problems was that the Ottoman vice admiral in 1911, Bucknam Pasha, at first successfully prevented the Italians from landing on the Tripolitanian coast.
The Italians believed that a force of 20,000 would be able to take over Libya. That force captured Tripoli, Tobruk, Derna, Benghazi, and Homs between 3 and 21 October. However, the Italians suffered a defeat at Shar al-Shatt, with at least 21 officers and 482 soldiers dead. In retaliation, the Italians executed 400 women and 4,000 men by firing squad and hanging.
The corps was consequently enlarged to 100,000 men who had to face 20,000 Libyans and 8,000 Ottomans. The war turned into one of position. Even the Italian utilisation of armoured cars and air power, both among the earliest in modern warfare, had little effect on the initial outcome. In the first military use of heavier-than-air craft, Capitano Carlo Piazza flew the first reconnaissance flight on 23 October 1911. A week later, Sottotenente Giulio Gavotti dropped four grenades on Tajura (Arabic: تاجوراء Tājūrā’, or Tajoura) and Ain Zara in the first aerial bombing in history.
Trench phase: Technologically and numerically superior Italian forces easily managed to take the shores. However, the Italians still could not penetrate deep inland. The Libyans and Turks, estimated at 15,000, made frequent attacks day and night on the strongly-entrenched Italian garrison in the southern suburbs of Benghazi. The four Italian infantry regiments on the defensive were supported by the cruisers San Marco and Agordat. The Italians rarely attempted a sortie. |
An attack of 20,000 Ottoman and local troops was repulsed on 30 November with considerable losses. Shortly afterward, the garrison was reinforced by the 57th infantry regiment from Italy. The battleship Regina Elena also arrived from Tobruk. During the night of 14 and 15 December, the Ottomans attacked in great force but were repulsed with aid of the fire from the ships. The Italians lost several field guns.
At Derna, the Ottomans and the Libyans were estimated at 3,500, but they were being constantly reinforced, and a general assault on the Italian position was expected. The Italian and Turkish forces in Tripoli and Cyrenaica were both constantly reinforced; the Ottoman withdrawal to the interior, in particular, enabled the Ottomans to reinforce their troops considerably.
Lacking a navy capable of sending regular forces to Libya, the Ottoman government instead encouraged a large number of young officers to travel to the area to rally the locals and coordinate the resistance. Enver Bey, Mustafa Kemal Bey, Ali Fethi Bey, Cami Bey, Nuri Bey and many other Turkish officers managed to reach Libya, travelling under assumed identities and posing as doctors, journalists and the like. The Ottoman Şehzade Osman Fuad also joined these officers, lending royal support to the resistance. During the war, Mustafa Kemal Bey, the future founder of the Republic of Turkey, was wounded in the eye by shrapnel. The cost of the war was defrayed chiefly by voluntary offerings from Muslims; men, weapons, ammunition and all kinds of other supplies were constantly sent across the Egyptian and Tunisian frontiers, notwithstanding the neutrality of those territories. The Italians occupied Sidi Barrani, on the coast between Tobruk and Sollum, to prevent contraband and troops from entering across the Egyptian frontier, and the naval blockade guarded the coast and captured several sailing ships laden with contraband.
Italian troops landed at Tobruk after a brief bombardment on 4 December 1911, occupied the seashore, and marched towards the hinterland against weak resistance. Small numbers of Ottoman soldiers and Libyan volunteers were later organised by Captain Mustafa Kemal (the future Atatürk). The small Battle of Tobruk on 22 December ended in Mustafa Kemal's victory. With that achievement, he was assigned to the Derna war headquarters on 6 March 1912 to coordinate operations in the field. The Libyan campaign had ground to a stalemate by December 1911.
On 3 March 1912, 1,500 Libyan volunteers attacked Italian troops who were building trenches near Derna. The Italians, who were outnumbered but had superior weaponry, held the line. A lack of coordination between the Italian units sent from Derna as reinforcements and the intervention of Ottoman artillery threatened the Italian line, and the Libyans attempted to surround the Italian troops. Further Italian reinforcements, however, stabilised the situation, and the battle ended in the afternoon with an Italian victory.
On 14 September, the Italian command sent three columns of infantry to disperse the Arab camp near Derna. The Italian troops occupied a plateau and cut the Ottoman supply lines. Three days later, the Ottoman commander, Enver Bey, attacked the Italian positions on the plateau. Superior Italian firepower drove back the Ottoman soldiers, who were surrounded by a battalion of Alpini and suffered heavy losses. A later Ottoman attack had the same outcome. Operations in Cyrenaica then ceased until the end of the war.
Although some elements of the local population collaborated with the Italians, counterattacks by Ottoman soldiers with the help of local troops confined the Italian army to the coastal region. In fact, by the end of 1912 the Italians had made little progress in conquering Libya. The Italian soldiers were in effect besieged in seven enclaves on the coasts of Tripolitania and Cyrenaica. The largest was at Tripoli and extended barely 15 kilometres (9.3 miles) from the town.
Naval warfare: At sea, the Italians enjoyed a clear advantage. The Italian Navy had seven times the tonnage of the Ottoman Navy and was better trained.
In January 1912, the Italian cruiser Piemonte, with the Soldato class destroyers Artigliere and Garibaldino, sank seven Ottoman gunboats (Ayintab, Bafra, Gökcedag, Kastamonu, Muha, Ordu and Refahiye) and a yacht (Sipka) in the Battle of Kunfuda Bay. The Italians blockaded the Red Sea ports of the Ottomans and actively supplied and supported the Emirate of Asir, which was also then at war with the Ottoman Empire.
Then, on 24 February, in the Battle of Beirut, two Italian armoured cruisers attacked and sank an Ottoman casemate corvette and six lighters, withdrew, and then returned to sink an Ottoman torpedo boat. The corvette Avnillah alone suffered 58 killed and 108 wounded. By contrast, the Italian ships took no casualties and no direct hits from any of the Ottoman warships. Italy had feared that the Ottoman naval forces at Beirut could be used to threaten the approach to the Suez Canal. The Ottoman naval presence at Beirut was completely annihilated, and casualties on the Ottoman side were heavy. The Italian Navy gained complete naval dominance of the southern Mediterranean for the rest of the war.
Although Italy could extend its control to almost all of the 2,000 km of the Libyan coast between April and early August 1912, its ground forces could not venture beyond the protection of the navy's guns and so were limited to a thin coastal strip. In the summer of 1912, Italy began operations against the Ottoman possessions in the Aegean Sea with the approval of the other powers, which were eager to end a war that was lasting much longer than expected. Italy occupied the twelve islands comprising the Ottoman province of Rhodes, which then became known as the Dodecanese, but the occupation raised the discontent of Austria-Hungary, which feared that it could fuel the irredentism of nations such as Serbia and Greece and unbalance the already fragile situation in the Balkan area. The only other significant military operation of the summer was an attack by five Italian torpedo boats in the Dardanelles on 18 July.
Irregular war and atrocities: With a decree of 5 November 1911, Italy declared its sovereignty over Libya. Although the Italians controlled the coast, many of their troops had been killed in battle and nearly 6,000 Ottoman soldiers remained to face an army of nearly 140,000 Italians. As a result, the Ottomans began using guerrilla tactics. Indeed, some "Young Turk" officers reached Libya and helped organize a guerrilla war with local mujahideen. Many local Libyans joined forces with the Ottomans because of their common faith against the "Christian invaders" and started bloody guerrilla warfare. Italian authorities adopted many repressive measures against the rebels, such as public hangings as retaliation for ambushes.
On 23 October 1911, over 500 Italian soldiers were slaughtered by Turkish troops at Sciara Sciatt, on the outskirts of Tripoli. The massacre reportedly occurred, at least in part, in response to the rape and sexual assault of Libyan and Turkish women by Italian troops. In retaliation, on the following day Italian troops carried out the 1911 Tripoli massacre, systematically killing thousands of civilians by moving through local homes and gardens one by one, including by setting fire to a mosque with 100 refugees inside. Although the Italian authorities attempted to keep news of the massacre from getting out, the incident soon became internationally known. The Italians began to show photographs of the Italian soldiers massacred at Sciara Sciatt to justify their revenge.
Treaty of Ouchy: Italian diplomats decided to take advantage of the situation to obtain a favourable peace deal. On 18 October 1912, Italy and the Ottoman Empire signed a treaty at Ouchy, in Lausanne, known as the First Treaty of Lausanne and often also called the Treaty of Ouchy to distinguish it from the 1923 Treaty of Lausanne (the Second Treaty of Lausanne).
The main provisions of the treaty were as follows:
The Ottomans would withdraw all military personnel from Trablus and Benghazi vilayets (Libya), but in return, Italy would return Rhodes and the other Aegean islands that it held to the Ottomans.
Trablus and Benghazi vilayets would have a special status and a naib (regent), and a kadi (judge) would represent the Caliph. |
Before the appointment of the kadis and naibs, the Ottomans would consult the Italian government.
The Ottoman government would be responsible for the expenses of these kadis and naibs.
Subsequent events prevented the return of the Dodecanese to Turkey, however. The First Balkan War broke out shortly before the treaty was signed, and Turkey was in no position to reoccupy the islands while its main armies were engaged in a bitter struggle to preserve its remaining territories in the Balkans. To avoid a Greek invasion of the islands, it was implicitly agreed that the Dodecanese would remain under neutral Italian administration until the conclusion of hostilities between the Greeks and the Ottomans, after which the islands would revert to Ottoman rule.
Turkey's continued involvement in the Balkan Wars, followed shortly by World War I (which found Turkey and Italy again on opposing sides), meant that the islands were never returned to the Ottoman Empire. Turkey gave up its claims on the islands in the Treaty of Lausanne, and the Dodecanese continued to be administered by Italy until 1947, when after the Italian defeat in World War II, the islands were ceded to Greece.
Aftermath: The invasion of Libya was a costly enterprise for Italy. Instead of the 30 million lire a month judged sufficient at the outset, the campaign came to cost 80 million lire a month, and over a much longer period than originally estimated. In total, the war cost Italy 1.3 billion lire, nearly a billion more than Giovanni Giolitti had estimated before the war, and it ruined ten years of fiscal prudence.
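A rough back-of-envelope check shows how these figures hang together, under the assumption (not stated in the text) that active operations ran for roughly thirteen months from the declaration of war in September 1911 to the Treaty of Ouchy in October 1912; the month count is illustrative only.

```python
# Rough consistency check of the cost figures quoted above. The 13-month
# duration is an assumption made for illustration, not a figure from the text.

MONTHS = 13
PLANNED_RATE = 30_000_000    # lire per month judged sufficient at the outset
ACTUAL_RATE = 80_000_000     # lire per month actually reached
TOTAL_COST = 1_300_000_000   # lire, total cost of the war as quoted

planned_total = PLANNED_RATE * MONTHS    # roughly 0.4 billion lire
run_rate_total = ACTUAL_RATE * MONTHS    # roughly 1.0 billion lire

print(f"Spend foreseen at the planned rate: {planned_total / 1e9:.2f} bn lire")
print(f"Spend at the actual monthly rate:   {run_rate_total / 1e9:.2f} bn lire")
print(f"Quoted total cost:                  {TOTAL_COST / 1e9:.2f} bn lire")
print(f"Overrun versus the planned spend:   {(TOTAL_COST - planned_total) / 1e9:.2f} bn lire")
```

On these assumptions, spending at the planned rate would have come to about 0.4 billion lire, so a final bill of 1.3 billion is indeed "nearly a billion more" than foreseen.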
After the withdrawal of the Ottoman army, the Italians could easily extend their occupation of the country, seizing eastern Tripolitania, Ghadames, the Djebel and Fezzan with Murzuk during 1913. The outbreak of the First World War, which made it necessary to bring the troops back to Italy, together with the Ottoman proclamation of jihad and the uprising of the Libyans in Tripolitania, forced the Italians to abandon all the occupied territory and to entrench themselves in Tripoli, Derna, and on the coast of Cyrenaica. Italian control over much of the interior of Libya remained ineffective until the late 1920s, when forces under Generals Pietro Badoglio and Rodolfo Graziani waged bloody pacification campaigns. Resistance petered out only after the execution of the rebel leader Omar Mukhtar on 15 September 1931. As a result of the Italian colonisation, the Libyan population had been cut in half by the mid-1930s through emigration, famine, and war casualties. The Libyan population in 1950 was at the same level as in 1911, approximately 1.5 million.
Europe, Balkans and First World War: In 1924, the Serbian diplomat Miroslav Spalajković could look back on the events that led to the First World War and its aftermath and say of the Italian attack, "all subsequent events are nothing more than the evolution of that first aggression." Unlike British-controlled Egypt, the Ottoman Tripolitania vilayet, which made up modern-day Libya, was core territory of the Empire, as were its Balkan provinces. The coalition that had defended the Ottomans during the Crimean War (1853–1856), minimised Ottoman territorial losses at the Congress of Berlin (1878) and supported the Ottomans during the Bulgarian Crisis (1885–88) had largely disappeared. The reaction in the Balkans to the Italian declaration of war was immediate. Serbia's first draft of a military treaty with Bulgaria against Turkey was written by November 1911; a defensive treaty was signed in March 1912, and an offensive treaty focused on military action against Ottoman-ruled Southeastern Europe followed in May 1912. The series of bilateral treaties between Greece, Bulgaria, Serbia and Montenegro that created the Balkan League was completed in 1912, and the First Balkan War (1912–1913) began with a Montenegrin attack on 8 October 1912, ten days before the Treaty of Ouchy. The swift and nearly complete victory of the Balkan League astonished contemporary observers. However, none of the victors were happy with the division of the captured territory, which resulted in the Second Balkan War (1913), in which Serbia, Greece, the Ottomans and Romania took almost all of the territory that Bulgaria had captured in the first war. In the wake of this enormous change in the regional balance of power, Russia switched its primary allegiance in the region from Bulgaria to Serbia and guaranteed Serbian autonomy from any outside military intervention. The assassination of Archduke Franz Ferdinand, the heir to the Austro-Hungarian throne, by a Serbian nationalist and the resulting Austro-Hungarian plan for military action against Serbia were major precipitating events of the First World War (1914–1918).
The Italo-Turkish War showed the French and British governments that Italy was more valuable to them inside the Triple Alliance than as a formal ally of the Entente. In January 1912, the French diplomat Paul Cambon wrote to Raymond Poincaré that Italy was "more burdensome than useful as an ally. Against Austria, she harbours a latent hostility that nothing can disarm". The tensions within the Triple Alliance would eventually lead Italy to sign the 1915 Treaty of London, under which it abandoned the Triple Alliance and joined the Entente.
In Italy itself, massive funerals for fallen heroes brought the Catholic Church closer to the government from which it had long been alienated. There emerged a cult of patriotic sacrifice in which the colonial war was celebrated in an aggressive and imperialistic way. The ideology of "crusade" and "martyrdom" characterised the funerals. The result was to consolidate Catholic war culture among devout Italians, which was soon expanded to include Italian involvement in the Great War (1915–1918). That aggressive spirit was revived by the Fascists in the 1920s to strengthen their popular support.
The resistance in Libya was an important experience for the young officers of the Ottoman Army, such as Mustafa Kemal Bey, Enver Bey, Ali Fethi Bey, Cami Bey, Nuri Bey and many others. These young officers went on to hold important commands in the First World War, to lead the Turkish War of Independence and to found the Republic of Turkey.
Fate of the Dodecanese Islands: Because of the First World War, the Dodecanese remained under Italian military occupation. According to the 1920 Treaty of Sèvres, which was never ratified, Italy was supposed to cede all of the islands except Rhodes to Greece in exchange for a vast Italian zone of influence in southwest Anatolia. However, the Greek defeat in the Greco–Turkish War and the foundation of modern Turkey created a new situation that made the enforcement of the terms of that treaty impossible. In Article 15 of the 1923 Treaty of Lausanne, which superseded the 1920 Treaty of Sèvres, Turkey formally recognised the Italian annexation of the Dodecanese. The population was largely Greek, and by treaty in 1947, the islands eventually became part of Greece. As the Dodecanese were part of Italy, the local population was not affected by the subsequent population exchange between Greece and Turkey, and a small community of Dodecanese Turks has remained to this day.
Literature: In his book Primo, the Turkish Child, the renowned Turkish author Ömer Seyfettin tells the fictional story of a boy living in the Ottoman city of Selânik (Salonica, today Thessaloniki), who has to choose his national identity between his Turkish father and Italian mother after the Italo-Turkish War of 1911–1912 and the Balkan Wars of 1912–1913 (Ömer Seyfettin, Primo Türk Çocuğu).
See also: Sciara Sciatt
Battles of Zanzur (1912)
Battle of Sidi Bilal
Commemorative Medal for the Italo-Turkish War 1911–1912
mil_tactics_continued_pretraining.csv | Jet engine | History: The principle of the jet engine is not new; however, the technical advances necessary to make the idea work did not come to fruition until the 20th century.
A rudimentary demonstration of jet power dates back to the aeolipile, a device described by Hero of Alexandria in 1st-century Egypt. This device directed steam power through two nozzles to cause a sphere to spin rapidly on its axis. It was seen as a curiosity. Meanwhile, practical applications of the turbine can be seen in the water wheel and the windmill.
Historians have further traced the theoretical origin of the principles of jet engines to traditional Chinese firework and rocket propulsion systems. Such devices' use for flight is documented in the story of Ottoman soldier Lagâri Hasan Çelebi, who reportedly achieved flight using a cone-shaped rocket in 1633.
The earliest attempts at airbreathing jet engines were hybrid designs in which an external power source first compressed the air, which was then mixed with fuel and burned for jet thrust. Both the Italian Caproni Campini N.1 and the Japanese Tsu-11 engine, intended to power Ohka kamikaze aircraft towards the end of World War II, were unsuccessful.
Even before the start of World War II, engineers were beginning to realize that engines driving propellers were approaching limits due to issues related to propeller efficiency, which declined as blade tips approached the speed of sound. If aircraft performance were to increase beyond such a barrier, a different propulsion mechanism was necessary. This was the motivation behind the development of the gas turbine engine, the most common form of jet engine.
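The propeller limit can be made concrete with a small back-of-envelope sketch; the propeller diameter, shaft speed and airspeeds below are assumed, typical-order values chosen purely for illustration, not figures from the text.

```python
import math

# Illustrative only: the diameter, rpm and speed of sound below are assumed values.
DIAMETER_M = 3.4         # propeller diameter in metres (assumed)
RPM = 1400               # propeller speed after gearbox reduction (assumed)
SPEED_OF_SOUND = 340.0   # m/s near sea level (approximate)

def helical_tip_mach(airspeed_ms: float) -> float:
    """Mach number at the blade tip, combining rotational and forward speed."""
    rotational = math.pi * DIAMETER_M * RPM / 60.0  # tip speed from rotation alone, m/s
    helical = math.hypot(rotational, airspeed_ms)   # vector sum with forward flight speed
    return helical / SPEED_OF_SOUND

for airspeed_kmh in (300, 500, 700):
    mach = helical_tip_mach(airspeed_kmh / 3.6)
    print(f"{airspeed_kmh} km/h forward speed -> blade-tip Mach ~{mach:.2f}")
```

Even with these rough numbers, the tip Mach number climbs from about 0.77 towards 0.93 as forward speed rises, which is the trend that pushed designers towards a different propulsion mechanism.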
The key to a practical jet engine was the gas turbine, extracting power from the engine itself to drive the compressor. The gas turbine was not a new idea: the patent for a stationary turbine was granted to John Barber in England in 1791. The first gas turbine to run successfully under its own power was built in 1903 by Norwegian engineer Ægidius Elling. Such engines did not reach manufacture due to issues of safety, reliability, weight and, especially, sustained operation.
The first patent for using a gas turbine to power an aircraft was filed in 1921 by Maxime Guillaume. His engine was an axial-flow turbojet, but was never constructed, as it would have required considerable advances over the state of the art in compressors. Alan Arnold Griffith published An Aerodynamic Theory of Turbine Design in 1926 leading to experimental work at the RAE.
In 1928, RAF College Cranwell cadet Frank Whittle formally submitted his ideas for a turbojet to his superiors. In October 1929, he developed his ideas further. On 16 January 1930, in England, Whittle submitted his first patent (granted in 1932). The patent showed a two-stage axial compressor feeding a single-sided centrifugal compressor. Practical axial compressors had been made possible by the ideas in A. A. Griffith's seminal 1926 paper, "An Aerodynamic Theory of Turbine Design". Whittle would later concentrate on the simpler centrifugal compressor only. Whittle was unable to interest the government in his invention, and development continued at a slow pace.
In Spain, pilot and engineer Virgilio Leret Ruiz was granted a patent for a jet engine design in March 1935. Republican president Manuel Azaña arranged for initial construction at the Hispano-Suiza aircraft factory in Madrid in 1936, but Leret was executed months later by Francoist Moroccan troops after unsuccessfully defending his seaplane base on the first days of the Spanish Civil War. His plans, hidden from Francoists, were secretly given to the British embassy in Madrid a few years later by his wife, Carlota O'Neill, upon her release from prison.
In 1935, Hans von Ohain started work on a similar design to Whittle's in Germany, both compressor and turbine being radial, on opposite sides of the same disc, initially unaware of Whittle's work. Von Ohain's first device was strictly experimental and could run only under external power, but he was able to demonstrate the basic concept. Ohain was then introduced to Ernst Heinkel, one of the larger aircraft industrialists of the day, who immediately saw the promise of the design. Heinkel had recently purchased the Hirth engine company, and Ohain and his master machinist Max Hahn were set up there as a new division of the Hirth company. They had their first HeS 1 centrifugal engine running by September 1937. Unlike Whittle's design, Ohain used hydrogen as fuel, supplied under external pressure. Their subsequent designs culminated in the gasoline-fuelled HeS 3 of 5 kN (1,100 lbf), which was fitted to Heinkel's simple and compact He 178 airframe and flown by Erich Warsitz in the early morning of August 27, 1939, from Rostock-Marienehe aerodrome, an impressively short time for development. The He 178 was the world's first jet plane. Heinkel applied for a US patent covering the Aircraft Power Plant by Hans Joachim Pabst von Ohain on May 31, 1939; patent number US2256198, with M Hahn referenced as inventor. Von Ohain's design, an axial-flow engine, as opposed to Whittle's centrifugal flow engine, was eventually adopted by most manufacturers by the 1950s.
Austrian Anselm Franz of Junkers' engine division (Junkers Motoren or "Jumo") introduced the axial-flow compressor in their jet engine. Jumo was assigned the next engine number in the RLM 109-0xx numbering sequence for gas turbine aircraft powerplants, "004", and the result was the Jumo 004 engine. After many lesser technical difficulties were solved, mass production of this engine started in 1944 as a powerplant for the world's first jet-fighter aircraft, the Messerschmitt Me 262 (and later the world's first jet-bomber aircraft, the Arado Ar 234). A variety of reasons conspired to delay the engine's availability, causing the fighter to arrive too late to improve Germany's position in World War II; however, this was the first jet engine to be used in service.
Meanwhile, in Britain the Gloster E28/39 had its maiden flight on 15 May 1941 and the Gloster Meteor finally entered service with the RAF in July 1944. These were powered by turbojet engines from Power Jets Ltd., set up by Frank Whittle. The first two operational turbojet aircraft, the Messerschmitt Me 262 and then the Gloster Meteor, entered service within three months of each other in 1944; the Me 262 in April and the Gloster Meteor in July. Only around 15 Meteors saw action in World War II, while up to 1,400 Me 262s were produced, some 300 of which entered combat and delivered the first ground attacks and air-combat victories by jet aircraft.
Following the end of the war the German jet aircraft and jet engines were extensively studied by the victorious allies and contributed to work on early Soviet and US jet fighters. The legacy of the axial-flow engine is seen in the fact that practically all jet engines on fixed-wing aircraft have had some inspiration from this design.
By the 1950s, the jet engine was almost universal in combat aircraft, with the exception of cargo, liaison and other specialty types. By this point, some of the British designs were already cleared for civilian use, and had appeared on early models like the de Havilland Comet and Avro Canada Jetliner. By the 1960s, all large civilian aircraft were also jet powered, leaving the piston engine in low-cost niche roles such as cargo flights.
The fuel efficiency of turbojet engines was still rather worse than that of piston engines, but by the 1970s, with the advent of high-bypass turbofan engines (an innovation not foreseen by early commentators such as Edgar Buckingham, to whom jet flight at high speeds and high altitudes had seemed absurd), fuel efficiency was about the same as that of the best piston and propeller engines.
Uses: Jet engines power jet aircraft, cruise missiles and unmanned aerial vehicles. In the form of rocket engines they power model rocketry, spaceflight, and military missiles.
Jet engines have propelled high-speed cars, particularly drag racers, with the all-time record held by a rocket car. A turbofan-powered car, ThrustSSC, currently holds the land speed record.
Jet engine designs are frequently modified for non-aircraft applications, as industrial gas turbines or marine powerplants. These are used in electrical power generation, for powering water, natural gas, or oil pumps, and providing propulsion for ships and locomotives. Industrial gas turbines can create up to 50,000 shaft horsepower. Many of these engines are derived from older military turbojets such as the Pratt & Whitney J57 and J75 models. There is also a derivative of the P&W JT8D low-bypass turbofan that creates up to 35,000 horsepower (HP).
Jet engines are also sometimes developed into, or share certain components such as engine cores, with turboshaft and turboprop engines, which are forms of gas turbine engines that are typically used to power helicopters and some propeller-driven aircraft.
Types of jet engine: There are a large number of different types of jet engines, all of which achieve forward thrust from the principle of jet propulsion.
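As a minimal sketch of that principle, net thrust for an airbreathing engine can be approximated as the air mass flow multiplied by the difference between exhaust velocity and flight velocity; the mass flow and velocity figures below are assumed, round numbers chosen only for illustration.

```python
# Minimal sketch of the jet-propulsion principle: net thrust ~= mdot * (v_exhaust - v_flight),
# ignoring the fuel mass flow and the nozzle pressure-area term.
# All numbers are assumed, round values for illustration only.

def net_thrust(mass_flow_kg_s: float, v_exhaust_ms: float, v_flight_ms: float) -> float:
    """Approximate net thrust in newtons for an airbreathing jet engine."""
    return mass_flow_kg_s * (v_exhaust_ms - v_flight_ms)

# A notional small turbojet at cruise: 50 kg/s of air, 600 m/s jet, 250 m/s flight speed.
print(f"Approximate net thrust: {net_thrust(50.0, 600.0, 250.0) / 1000:.1f} kN")  # ~17.5 kN
```

The same relation shows why high-bypass designs pay off: accelerating a larger mass flow by a smaller velocity difference produces the same thrust with less wasted kinetic energy in the exhaust.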
Airbreathing: Aircraft are commonly propelled by airbreathing jet engines.