Monday, September 18, 2017

Inertial Measurement Units (IMUs) - Some key issues - IV

The future is hard to predict, and the present can only be described partially because we are never fully informed about anything. With that caveat, I can make the following guesses about where things are likely to proceed.

As seen in earlier posts, barring major advances in our understanding of gravity, defining the accuracy of an IMU will remain a very challenging affair. The precision of an IMU is a simpler matter, and we are likely to see Atom Optics based gyros used to "clock" the performance of other systems. This kind of bootstrapping will create a deeper understanding of the nature of the error in those systems.

As Atom Optics related technologies become better engineered, we will see a gradual shift in the mission-critical side of IMU applications. On the commercial side, we will likely see growth in MEMS based applications. It is quite possible that these two branches will come to leverage each other.

I feel we are also likely to see the following happen:

1) Role of Sensor Fusion: Using sensors of different types to check on each other offers interesting avenues for reducing noise in measurements of gravity. Schemes involving magnetometers have already been demonstrated, but a number of other schemes may also be possible. Such schemes will improve the precision of any number of existing devices (the first sketch after this list shows the basic idea).

2) Clouds help reduce noise: In theory one could have the IMU transmit its raw signal to a cloud service, filter it there, and send the cleaned signal back to the guidance system (the second sketch after this list). This would be too unwieldy for a military or strategic application, but it may be possible to use this approach with commercial devices.

3) Deep learning will help fish weak signals from noise: It may become possible to train a deep learning network to extract small signals out of the noisy data from a cheap IMU. Once such a network is trained, a version of it could be deployed on an embedded system attached to the IMU (the third sketch after this list). It is difficult to ascertain how "good" this could be in actual deployment, but the idea is plausible.
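Since readers keep asking what any of this looks like in code, here are three toy sketches in Python. They are illustrations of the ideas above, not designs: every endpoint, gain, and architecture choice in them is invented for the purpose of the sketch.

First, sensor fusion (item 1): a complementary filter that fuses a drifting gyro yaw rate with a noisy but drift-free magnetometer heading. The gain alpha is a made-up tuning value.

    import numpy as np

    def fuse_heading(gyro_rate, mag_heading, dt=0.01, alpha=0.98):
        """gyro_rate: rad/s samples; mag_heading: rad samples."""
        heading = mag_heading[0]      # initialize from the magnetometer
        out = []
        for w, m in zip(gyro_rate, mag_heading):
            # trust the gyro at short timescales, the magnetometer at long ones
            heading = alpha * (heading + w * dt) + (1.0 - alpha) * m
            out.append(heading)
        return np.array(out)

Second, the cloud round trip (item 2): batch raw samples, post them to a hypothetical filtering service, and get the cleaned trace back. The URL and JSON schema are invented; the round-trip latency is exactly why this is unusable for guidance-grade work.

    import requests

    def cloud_filter(samples, url="https://example.com/imu/filter"):
        # hypothetical endpoint: send raw samples, receive a filtered trace
        resp = requests.post(url, json={"samples": samples}, timeout=1.0)
        resp.raise_for_status()
        return resp.json()["filtered"]

Third, the deep learning idea (item 3): a tiny 1-D convolutional denoiser trained on (noisy, clean) trace pairs, small enough that a trained copy could plausibly sit on an embedded board. Layer counts and sizes are arbitrary.

    import torch
    import torch.nn as nn

    class Denoiser(nn.Module):
        def __init__(self):
            super().__init__()
            # arbitrary small stack; padding keeps the trace length fixed
            self.net = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv1d(16, 1, kernel_size=9, padding=4),
            )

        def forward(self, x):          # x: (batch, 1, samples)
            return self.net(x)

    def train_step(model, opt, noisy, clean, loss_fn=nn.MSELoss()):
        opt.zero_grad()
        loss = loss_fn(model(noisy), clean)
        loss.backward()
        opt.step()
        return loss.item()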

When speaking about these issues in the context of an actual deployment (as opposed to a "hey check out my Github for my latest Python code" context) - we are looking at a lot of time and money spent on developing high reliability code and hardware. Those problems easily add decades to the simplest thing.

N.B. In order to keep this simple, I have left out two other sources of trouble in an IMU - offset and latency. These topics are complicated for non-specialists, and getting into them would not add anything to what I am attempting to do here.

Inertial Measurement Units (IMUs) - Some key issues - III

We have come a long way since the first mechanical spinning flywheel gyros were made at the beginning of the 19th century. There are three novel sensing techniques available now:

1) Coriolis effect based devices [a nice discussion here] - These devices are now very popular in commercial cell phones. The most popular versions use Micro Electro-Mechanical Systems (MEMS), in which rotation and acceleration are sensed as changes in the capacitance of a microfabricated circuit. The demands put on the fab side are significant, although things are getting cheaper as the scale of deployment grows. As the entire device is made in a semiconductor fab, the overhead associated with creating noise control electronics is reduced; since the signal is basically electrical in nature, a number of existing designs for low noise signal amplification can be leveraged to improve performance. Though not terribly good in terms of precision, these devices are cheap enough to be deployed at scale. It is possible to remove noise by using a magnetometer or other sensors, but the result is still pretty bad relative to existing peers. With commercial applications growing, a lot of people are working on ways to fuse the data from multiple sensors and use cloud based big-data filtering tools to get intelligence from these devices, but that stuff is still IMHO in its infancy. It is fantastically easy to get hold of a piece of Python code that hacks into one of these and gets data out (the first sketch after this list is typical). If you are looking for a place to start learning about these, I recommend playing with the Arduino backed versions. I did this with a high school student on the robotics team some years ago - it was a fantastic pain in the rear but great as a learning tool.

2) Sagnac effect based devices [a good place to start] - These devices are popular on the aerospace side. The effect is used in Ring Laser Gyros and Fiber Optic Gyros (it is also used in Atom Optics based systems, but that is discussed as a separate topic below). One would naively think these systems are the most robust form of sensing possible, but there are subtle issues that limit their capabilities and utility [see here]. The main limitation comes from the fact that to get very high resolution one needs a very large path length, and such a path length can only be achieved by incurring penalties in weight and size (the second sketch after this list puts rough numbers on this). The manufacture of these devices is non-trivial and would require significant investment.

3) Atom Optics [See paper by Mark Kasevich et al. in this link] - These devices were originally conceived as sensitive tests of gravitational physics in the atom optics boom years of the last century. The ideas lounged in unwieldy room-sized setups in the basements of physics departments for several decades, but sustained investment from the NI-24 program by the Navy and a passing interest from the IC enabled the construction of very robust variants of these devices*. Though initially considered too fragile for real world applications, the quality of engineering has steadily improved, and I think we may see these on real world aerospace platforms. These devices hold the promise of a significant reduction in noise over current systems (the third sketch after this list shows why the underlying physics is so sensitive), and the general expectation is that with some effort they will lead to a much better place in the long run. That said, the manufacture of these devices is non-trivial, and the demands made on associated instrumentation are significantly larger than for mechanical devices.
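To put some flesh on these three, a few toy calculations in Python. All numbers are illustrative; none are device specs.

For the MEMS devices (item 1), this is roughly what the "piece of Python code" looks like, assuming the ubiquitous MPU-6050 part on an I2C bus. The register addresses are from the part's public datasheet; everything else (bus number, full-scale ranges) is assumed at defaults.

    from smbus2 import SMBus

    ADDR = 0x68            # default MPU-6050 I2C address
    PWR_MGMT_1 = 0x6B      # power management register
    ACCEL_XOUT_H = 0x3B    # accel data starts here; gyro follows at 0x43

    def to_int16(hi, lo):
        # sensor values are 16-bit big-endian two's complement
        v = (hi << 8) | lo
        return v - 65536 if v & 0x8000 else v

    with SMBus(1) as bus:
        bus.write_byte_data(ADDR, PWR_MGMT_1, 0)   # wake the device up
        raw = bus.read_i2c_block_data(ADDR, ACCEL_XOUT_H, 14)
        vals = [to_int16(raw[i], raw[i + 1]) for i in range(0, 14, 2)]
        ax, ay, az = (v / 16384.0 for v in vals[0:3])  # g, at +/-2 g default
        gx, gy, gz = (v / 131.0 for v in vals[4:7])    # deg/s, at +/-250 deg/s
        print(ax, ay, az, gx, gy, gz)

For the Sagnac devices (item 2), the path-length penalty falls straight out of the textbook phase shift, delta_phi = 8*pi*A*N*Omega/(lambda*c): resolving small rotation rates needs a large enclosed area A (or many fiber turns N), which costs size and weight.

    import math

    c = 3.0e8                  # m/s
    lam = 1.55e-6              # m, typical telecom fiber wavelength
    omega_earth = 7.292e-5     # rad/s, earth rotation rate

    def sagnac_phase(area_m2, turns, omega):
        return 8 * math.pi * area_m2 * turns * omega / (lam * c)

    # a 10 cm radius coil with 1000 fiber turns, sensing earth rotation:
    print(sagnac_phase(math.pi * 0.1**2, 1000, omega_earth))  # ~1.2e-4 rad

For the Atom Optics devices (item 3), the textbook phase of a light-pulse atom interferometer is delta_phi = k_eff * a * T^2, so sensitivity grows with the *square* of the free-fall time T - which is why these started out as room-sized setups.

    import math

    lam = 780e-9                      # m, Rb D2 line
    k_eff = 2 * (2 * math.pi / lam)   # two-photon Raman, ~1.6e7 rad/m
    g = 9.81                          # m/s^2

    for T in (0.001, 0.01, 0.1):      # interrogation time, seconds
        print(T, k_eff * g * T**2)    # phase in rad, grows as T^2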

If I were to rate these platforms qualitatively in order of error (given that hard metrics are difficult to come by in this context), I would say that Coriolis systems have the worst noise issues, followed by RLGs and Fiber Optic devices. The Atom Optics systems have the best noise characteristics cited in public sources. I would not take any of these numbers too literally, as they are not available for the kinds of applications that have spiked public interest; those numbers are a closely guarded secret for obvious reasons.

As a rule of thumb, if you have a lot of noise on a sensor - you have a large overhead in terms of associated algorithms (and related electronics and software) needed to clean up that mess. IMHO this really limits the ability to use commercial/off-the-shelf stuff in strategic or mission critical applications.

In the next post I will make a few remarks in passing about the way things might change in the future.

* one of the prime drivers of this effort was the retirement of the highly qualified technicians who could make mechanical gyroscopes and gradiometers in the US. Faced with a forced technological regression, the S&T guys in the USG gravitated towards the only candidate that offered hope of rapid advances at the time. Hence the interest in Atom Optics, which had also emerged as a major candidate for other high impact technologies like Quantum Computation.

Inertial Measurement Units (IMUs) - Some key issues - II

As I indicated in the previous post, the ability to make a "good" IMU is challenged by two basic issues:

1) The ability to machine precise parts - such as perfect spheres.
2) The ability to correctly model the behavior of gravity along the IMU's trajectory.

The absence of perfect machining creates avenues for error to enter the measurement of gravity. This affects the precision of the IMU.

The inability to properly model the local behavior of gravity leads to misinformed notions of accuracy.

There can be a "sweet spot" where acceptable levels of imprecision and inaccuracy coexist in harmony. Under such circumstances, it may be possible to make an IMU that is "good enough" for a particular role. Typically, short range ballistic missiles can get away with "crappier" IMUs simply because they aren't going very far, very high, or very fast. However, as you go up in speed and altitude, IMUs become quite critical to success.

I suspect the North Koreans are in such a "sweet spot", but I fear they will not be able to stay there very long as their ambitions grow with each passing day.

Here are some ways to manage the error in the measurements:

1) Comparing measurements on two or more IMUs - If we mount two IMUs on our rocket, we can examine how their estimates of height differ. If one IMU is much more sensitive than the other (i.e. able to see differences in height of centimeters as opposed to meters), we can check that it reports a change of 100 cm when the coarse IMU reports a change of 1 m. This kind of cross-checking is pretty common in other measurements. In the published literature you hear terms like "Allan Variance" [see this link for more]; this refers to a way of characterizing the noise of a sensor and getting some meaningful measurements of the ARW and drift (the first sketch after this list shows the computation). In practice, placing multiple IMUs (especially combinations of fine and coarse ones) on operational platforms is a major manufacturing burden.

2) Error modeling - Once measured, there are ways to model the ARW and drift errors in our IMU. Error analysis tools have evolved significantly over the decades, and some really amazing stuff is now available. Most of the methods are some variation of "quaternion based filtering". "Quaternions" are a mathematically compact way of representing the orientation information typically obtained from an IMU; "filtering" because you are removing noise from the IMU data. Here the community of signal analysts broadly splits into two groups - the DSP guys (who use deep-understanding-based ideas like Kalman filtering, as in the second sketch after this list) and the Deep Learning guys (who use techniques like neural networks). It is not clear whether either approach gives a clear advantage in terms of accuracy; however, both approaches require concurrent development of embedded computation systems. That adds large overheads to the manufacturing burden associated with IMUs. You are basically adding a dedicated fab line, firmware development, and software validation & testing to the program cost.

3) RF ranging and other external referencing - You can always use a simple RF signal to correct the accumulation of errors in the IMU. However, for extremely long range trajectories, the RF signal runs out of line of sight with your rocket, so you have to either use a satellite or do something quite complicated to "get your bearings". If you decide to use a satellite RF beacon, you need a really good way of keeping that satellite in a particular spot in space; otherwise you can't range off it in any error-free way. That part can get really entertaining given all the weird drag effects in earth orbit and those gravitational effects I alluded to earlier. Also, you are now adding the cost of a satellite beacon program to the cost of your rocket guidance program. This quickly devolves into a number of chicken-and-egg questions - a highly unpleasant situation, but sometimes a way can be found. One of my favorite ideas in this context is Stellar Navigation: a combined "Astro Inertial Navigation" system was used on the SR-71, and a stellar alignment system was used on Gravity Probe B. These are relatively simple to implement and very robust. The unfortunate side effect of external referencing is that it can be interfered with, which makes it less suitable for nuclear deterrence missions (the last sketch after this list shows the basic drift-reset idea).
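Three toy sketches in Python to go with the three items above. All noise magnitudes and tuning values are invented for illustration.

For item 1, the standard Allan deviation computation from a record of rate samples: bin the data at an averaging time tau and look at how adjacent bin means differ. White noise (ARW) shows up as a -1/2 slope of sigma versus tau on a log-log plot; drift bends the curve back up at long tau.

    import numpy as np

    def allan_deviation(rate, dt, m):
        """rate: 1-D samples; dt: sample period (s); m: samples per bin."""
        n_bins = len(rate) // m
        means = rate[:n_bins * m].reshape(n_bins, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)
        return m * dt, np.sqrt(avar)   # (tau, sigma(tau))

    rng = np.random.default_rng(0)
    gyro = 0.02 * rng.standard_normal(100_000)   # fake white-noise gyro
    for m in (1, 10, 100, 1000):
        print(allan_deviation(gyro, dt=0.01, m=m))

For item 2, the flavor of the DSP camp's approach in one dimension: a two-state Kalman filter that tracks an angle and the gyro's slowly wandering bias, corrected by a noisy external reference. The real 3-D versions carry quaternions instead of a scalar angle; all the tuning values here are made up.

    import numpy as np

    def kalman_angle(rates, refs, dt=0.01, q_angle=1e-5, q_bias=1e-7, r_ref=1e-2):
        x = np.zeros(2)                      # state: [angle, gyro bias]
        P = np.eye(2)
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        Q = np.diag([q_angle, q_bias])
        H = np.array([[1.0, 0.0]])
        out = []
        for w, z in zip(rates, refs):
            x = F @ x + np.array([w * dt, 0.0])   # integrate corrected rate
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + r_ref               # blend in noisy reference
            K = (P @ H.T) / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            out.append(x[0])
        return np.array(out)

For item 3, the value of any external reference in one toy model: inertial position error random-walks between fixes, and each RF or stellar fix clamps it back down to the fix accuracy.

    import numpy as np

    rng = np.random.default_rng(1)
    err, worst = 0.0, 0.0
    for t in range(10_000):
        err += rng.normal(0.0, 1.0)       # drift accumulates every step
        if t % 1000 == 0:                 # a fix arrives now and then
            err = rng.normal(0.0, 5.0)    # error resets to fix accuracy
        worst = max(worst, abs(err))
    print("worst error between fixes:", round(worst, 1))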

In my next post I will discuss some novel gravity sensing systems that are finding application in commercial IMUs and how things might play out for them in the future.

(cont'd in next post).

Inertial Measurement Units (IMUs) - Some key issues - I

The proliferation of Inertial Measurement Units (IMUs) has rightly caused people to become concerned about the likelihood of their misuse by rogue states. There are however physical constraints that limit certain kinds of misuse. I discuss some of the key limitations below.  A good reference to have handy for this is "Inventing Accuracy". If you have problems following what I am saying, please reply to this post in the comments below and I will get back to you asap.

For the purposes of this discussion, let us consider a simplified IMU which consists of a gyroscope and a gradiometer. The gyroscope ensures that the gradiometer is aligned with the vertical direction. In our simple model, the gyroscope is a mechanical device - a spinning wheel (the kind you might find in an undergrad physics lab) - and the gradiometer is a simple spring which is compressed or stretched by a test mass attached to it. Also, let us assume that our IMU is non-ideal in predictable ways and that it is attached to a rocket that behaves in a totally predictable way (these are both oversimplifications that do not hold IRL).

In the ideal case, our gyroscope is spun up to a certain angular velocity about its vertical axis, and since the entire assembly sits on a gimbal mount, it holds the spring and test mass of the gradiometer perfectly vertical. The test mass experiences a gravitational field that pulls it downwards, and this causes the spring in the gradiometer to extend. If we apply an acceleration to the IMU (as we might if we were to light the rocket engine under it), we see the extension change as the added acceleration also pulls on the test mass.

In the ideal world, our IMU works perfectly: as the rocket engine lights up, we see the added acceleration add to gravity and the extension increases. As the rocket rises into space, the acceleration due to gravity decreases. A computer attached to the IMU records the change in extension with time, and when the change reaches a particular amount, the computer attributes this to the rocket reaching a particular height above ground and shuts off the rocket engine. Everyone is happy.
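The whole ideal-world loop fits in a few lines of Python. This follows the simplified model above (the extension tracks gravity plus engine acceleration); the test mass, spring constant, thrust, and cutoff altitude are all made up.

    m, k = 0.1, 50.0          # test mass (kg), spring constant (N/m)
    g0, R = 9.81, 6.371e6     # surface gravity (m/s^2), earth radius (m)
    thrust, dt = 30.0, 0.01   # engine acceleration (m/s^2), time step (s)
    h, v, cutoff_h = 0.0, 0.0, 50e3

    while h < cutoff_h:
        g = g0 * (R / (R + h)) ** 2        # gravity fades with altitude
        extension = m * (g + thrust) / k   # what the gradiometer "reads"
        v += (thrust - g) * dt             # perfectly predictable rocket
        h += v * dt
    print("cutoff at", round(h), "m; extension", round(extension, 4), "m")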

That's not the way it works IRL.

Firstly, our gyro experiences friction on its bearings. This leads to a torque that changes its angular momentum. The decline in angular momentum presents in two ways - firstly as a set of random angular deceleration events that cause the angle of the gyro to rattle around (this is called Angular Random Walk or ARW), and secondly as a slow reduction in angular velocity that causes the angle of the gyro to shift in one direction (this is called "drift"). As the gradiometer is attached to the gyro, shifts in the gyro angles propagate to the measurements of acceleration. The exact model of propagation is quite nontrivial, but in this way the gradiometer picks up an ARW and drift of its own.
Errors in the gradiometer reading (i.e. the extension) translate into errors in the estimation of the height of the rocket above ground. A large error could significantly alter the trajectory of the rocket.
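Both signatures are easy to generate in a few lines of Python: white rate noise integrates into a random walk of the angle (ARW), while a constant bias accumulates as a linear drift. The magnitudes below are invented.

    import numpy as np

    rng = np.random.default_rng(2)
    dt, n = 0.01, 60_000                        # ten minutes at 100 Hz
    arw_noise = 0.001 * rng.standard_normal(n)  # white rate noise, rad/s
    drift_rate = 5e-5                           # constant bias, rad/s
    angle = np.cumsum((arw_noise + drift_rate) * dt)
    # the angle wanders as sqrt(t) from ARW and grows linearly from drift
    print(angle[-1], "rad after", n * dt, "s")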

A mechanical gyro and gradiometer may sound very low tech, but they are based on technologies that are over a hundred years old and extremely reliable. If you can machine perfect spheres (which turns out to be a lot harder than one might think), you can make very high precision and high "accuracy" IMUs. I use "accuracy" in quotes because it turns out to be quite difficult to define the term in this peculiar context.

As we go up and out from the earth, we experience gravitational contributions from poorly characterized terrestrial sources (such as the non-spherical nature of the earth) and extraterrestrial ones (the moon, nearby asteroids, tidal effects, etc.). These effects make it hard to claim deep knowledge of the gravitational acceleration at various altitudes, which in turn makes it difficult to define "accuracy" in the context of a gradiometer.
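Even the easy, leading-order part of this is a two-line estimate; everything I am worried about here (oblateness, local anomalies, lunar and solar tides) sits on top of it:

    # leading-order altitude dependence only: g(h) = g0 * (R/(R+h))^2
    g0, R = 9.80665, 6.371e6
    for h_km in (0, 10, 100, 500):
        h = h_km * 1e3
        print(h_km, "km:", round(g0 * (R / (R + h)) ** 2, 4), "m/s^2")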

(cont'd in next post).


Thursday, September 14, 2017

Op MEDEA or why I watch documentaries!

I watched a very nice documentary last night. And it took my breath away.

For a long time now I have been looking at the scientific side of the Global Climate Change awareness campaign, and I have wondered how so many senior people could state things as facts. I was always astounded by the scale of the data the awareness campaign was bringing into the public domain.

I was not alone in this, many other physicists had similar reservations about the awareness material. Most of us would say things like "it is an interesting model" or "would be nice to see the raw data" or "I wonder what couplings were used to model so and so effects".

A few examples of this


  1. An awareness campaign was launched to show how the arctic ice cap has changed over the last century. When I saw this I was stunned. I kept asking myself - "Wow!! how long have they been collecting this data?"
  2. There was a set of discussions about deep ocean currents, which depend sensitively on temperature profiles inside the ocean. I saw those discussions and asked myself - "Gee, it would be horribly expensive to take those measurements. Has NOAA or some planetary science lab been measuring that stuff for decades now?"
  3. There was a YouTube video last year which spoke about patterns of high altitude jet streams and how those were changing - specifically, how a southern hemisphere jet stream was crossing the equator, which it was claimed was happening for the first time ever. And again I found myself asking - "How on earth do you know it has never happened before?"

Turns out "they" knew and "they" told the scientists.

Specifically - in the late 80s the CIA opened up its massive vault of MASINT and SATINT to 70-odd high ranking scientists. The effort was pushed by then-Sen. Al Gore, who felt that these measurements held the key to finding signatures of global climate change.

The CIA had been repeatedly photographing the polar icecap; the Navy had been measuring the thermocline and deep ocean currents, and had hydrophones to detect sounds in the ocean; the Air Force had data on high altitude winds; the seismology people had geophones to catch nuclear tests; and so on... they were able to put a database of unimaginable size before scientists who had never had access to such information.

The result was the first ever model of Global Climate Change. It appears that the scientists were able to create a way of capturing icecap melting, precipitation shifts, extreme weather events, etc... This was way back in the early 90s.

When Al Gore became VP, the scientists asked if he could open a door to the Russian IC and see if they were willing to be part of this program to understand global climate change. Then-DCI Robert Gates supported the venture. The result was a one-of-a-kind intelligence collaboration between the CIA's DS&T and the GRU's physical intelligence branch.

Russian and American scientists worked together and an incredibly coherent picture of climate change effects was built. The model predicted among other things - a steady rise in flooding due to high water content cyclonic events, droughts and ensuing migrations in Africa, shifts in the weather patterns and "freak" events.

The exact model was far from settled, but there was a lot of constructive debate and nations worked together. The data was secret but the analysis was public. Most ICs (like India's) knew what was going on, and the analysis was a major influence on national policymaking.

When Bush Jr came into power, the toxic masculinity of the GOP took over - it was all about "drill baby drill" - and a brutal decade of pointless wars followed. Putin seized power in Moscow, and he too was in the pocket of Russia's vast oil and gas lobby, so MEDEA was shut down.

After Obama came to power, he reinstated the effort and asked it to deliver a list of specific national security risks from Global Climate Change. This created a fountainhead of information - a kind of socket that the national security policy machinery is permanently plugged into. As the Nat-Sec paper mill cranks out position papers (that vital commodity on which all real decisions are made), it draws on this MEDEA-inspired factual database.

In 2015 the group disbanded as its work was done.

The movie has taught me two things

1) Always trust my instincts - if something looks odd or unsupported - something is actually amiss and

2) There is a LOT MORE HARD DATA supporting climate change than even I had believed. Also the data has been COLLECTED BY COMPLETELY NEUTRAL and UNCONNECTED IC OBSERVERS in at least TWO SEPARATE COUNTRIES over SEVERAL DECADES before Climate Change became a fashionable media topic.  

That latter part is HUGE. WAY WAY BIGGER than what Trump thinks is the size of his "Hands".*

If you can - watch the documentary - it is worth the time.

* Given how intimately the Nat Sec paper mill and the MEDEA data sockets are connected, I suspect Trump and the GOP will not be able to use the position paper mechanism at all. This will gravely impair their ability to make sound national security policies (but that is a discussion for another post).

Tuesday, September 12, 2017

Was DPRK 6 a two-stage TN device?

As I indicated earlier, I believe that for policy purposes one should treat DPRK's thermonuclear ambitions as a reality, but at the technical level it is vital to go on asking questions.

At the present time, we only have seismic signatures of the DPRK 6 event. The radiochemical (RC) data from air sampling has not been made public. Given the proliferation sensitivity of the isotope ratios, that information is generally not made public.

So, quite understandably, there is a lot of back and forth about OS information that would conclusively point to DPRK 6 being a two stage TN device.

A lot of people feel that DPRK could not have developed a device of this complexity so quickly. That sounds reasonable until you see that the DPRK timescale is quite comparable to others who pursued a very accelerated and aggressive development cycle.

Rough estimates of DPRK U-235 and Pu-239 stockpiles are available. There is a section in there about how DPRK might produce Tritium, but there are no numbers that we can take from it. There is also a suggestion that irradiation of Lithium targets is possible in DPRK reactors, so the only thing that could limit DPRK's ability to make LiD (the fuel used in modern two stage TN devices) is the availability of Deuterium.

There does not appear to be any public discussion on where DPRK might get its hands on Deuterium. From public records DPRK does not appear to have any heavy water moderated reactors. There do not appear to be plants inside DPRK that produce heavy water. There is no public evidence of DPRK importing heavy water from any known sources.

If public domain information should emerge that DPRK was able to successfully source D2O (heavy water) from somewhere outside the country, then one might be able to argue that significant public domain evidence supports the notion that DPRK 6 was a two stage device.

Monday, September 11, 2017

The Dance of Mohini

I have long maintained that most of the ancient Hindu texts are meta-narratives that discuss common themes in proliferation and counter proliferation. Whether historically accurate or mere works of imagination, they contain ideas that are applicable in a variety of situations.

One particular story arc that sticks in my mind is the "Dance of Mohini". The original story has been discussed elsewhere in greater detail by numerous experts. For our purposes, I think we can reduce it to the following - the adversary is seduced into using their most potent weapon against themselves.

The attractive aspect of the "Dance of Mohini" is that it uses a subterfuge - a gesture of peace - as opposed to the traditional escalation framework. The subterfuge lulls the adversary into a false sense of security and that decline in security consciousness is used to bring the enemy to death's door.

In the story itself, the Demon King Bhasmasur is seduced into thinking that he is merely dancing with a beautiful woman who will soon be his bride. His longing and lust for the woman allow him to forget that his right hand possesses the power of death. In the dance, the beautiful woman - Mohini - raises her right hand over her head, and King Bhasmasur does the same - turning himself into ashes.

Those of you who have followed discussions on the disreputable forum will recognize that this idea was proposed almost 15 years earlier in a different context. I can't say for sure if anyone listened to me back then but the idea has a je ne sais quoi about it.

Friday, September 08, 2017

Some observations on the aftermath of DPRK 6

I am not a Korea expert, so please consider these just the comments of an outside observer.

1) KJU has released photos of what appears to be a mock-up of a two stage nuclear weapon. The mock-up is most likely heavily influenced by open source information available on the W88 warhead. The current consensus on the seismic signatures of DPRK 6 is that the device likely achieved 160 kT. Per KPA sources, rough estimates of the CEP of DPRK missiles are in the few-miles range, and they seem to think that 1 in 4 missiles will make it through the BMD screen.

2) Against that backdrop, it is reasonable to conclude that KJU (and DPRK) is pursuing a path that will lead to acquiring two stage (high fusion yield) devices. This path appears to be motivated on the technical side by concerns about the poor CEP and low OAR of their long range delivery platforms.

3) It appears that at some point in the past Pyongyang expressed a desire for a deal (mimicking the outline of the IUCNA). It is difficult to know whether this is just a "blow-hot-blow-cold" ruse designed to deflect attention from KJU's true intentions on the matter, or a genuine interest among some group of people inside DPRK in a normalization of US-DPRK relations.

4) It is also difficult to gauge the extent to which KJU's behavior is driven by internal threats to his power. KJU has not earned his place at the top of the DPRK military; he nominally holds the rank of "Wonsu" but not the rank of "Dae Wonsu" that his father and grandfather held. While there is little doubt that the DPRK Armed Forces will follow his instructions, they may not do so with the same enthusiasm with which they followed his grandfather or father. This IMHO reflects a gap between KJU's actual stature and his desired stature that will likely leave him perpetually insecure.

5) Given that backdrop, if KJU were to weaponize his nuclear devices, several uncomfortable questions arise about authorized use or the likelihood of these weapons falling into the wrong hands. This automatically opens a discussion on Permissive Action Links and related safety systems.

6) As there is a significant trust deficit between DPRK and the US, and handing an issue of this sensitivity to Donald Trump is discomforting, I wonder if there is something to be gained by conducting a simulation of a negotiation between KJU and the US. I imagine there are enough subject matter experts in the US, Japan and South Korea that one could simulate a decent KJU!

I am tempted to think that if one engages KJU in productive dialogue, one might be able to probe the issue of PALs with him. In the event that the desire for normalization of relations with the US is genuine, he may agree to locating US supplied PALs on his warheads.

There is a significant gap between KJU expressing an interest in US manufactured PALs and actually having US teams on the ground putting them onto his weapons, but this could serve as an important trust building step.

Tuesday, September 05, 2017

DPRK 6 has most likely crossed the design capabilities of S2

Based on seismic estimates, it appears that DPRK 6 has crossed the maximum stated capability of the S2 boosted fission design. Current estimates from seismology rest at about 100-300 kT. There is some discussion about the contribution of the related tectonic event; one estimate puts that at around 34%, which would mean some 66% of the energy came from the device.

At the low end, the estimates graze the highest values for the S2 *test*. At the high end, they exceed the proposed capabilities of the S2 *design*.
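For readers wondering where such numbers come from, the usual back-of-envelope is an empirical magnitude-yield relation of the form mb = a + b*log10(Y). The hard-rock coefficients in the Python below are one commonly cited choice, not a calibrated model of the test site's coupling, so treat the output as illustrative only:

    # mb = a + b*log10(Y_kt); a=4.45, b=0.75 is one commonly cited
    # hard-rock choice, NOT a calibrated model of the DPRK test site
    def yield_kt(mb, a=4.45, b=0.75):
        return 10 ** ((mb - a) / b)

    for mb in (6.0, 6.1, 6.2, 6.3):
        print(mb, "->", round(yield_kt(mb)), "kT")   # ~120 to ~290 kT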

Unlike Pokhran, the Punggye-ri site is a granite mountain, so there are no pressing environmental concerns. There does not appear to be a regional framework to deter DPRK from further testing, and Donald Trump doesn't have what it takes to shut this down. Per ROK intel, preparations for DPRK 7 are complete. If that means a seventh device has already been placed in the hole, then DPRK may be very far along the path to advanced design capabilities. There are NO real caps on this.

In a broader sense, the rapid march of DPRK from demonstrating basic fission capabilities to boosting, and possibly even some modest fusion yields, is a very peculiar and alarming case of vertical proliferation. The photos Kim Jong Un lets us see of his "warhead capable" physics packages clearly point to a great deal of open source study by DPRK experts. While these photos are unlikely to show the actual designs used by DPRK in its weapons, they point to a very large number of design studies conducted on the matter. This is not unusual, but the speed at which they are doing it is quite unsettling.

When the decision was taken to demonstrate a boosted design in 1998, the world was a very different place. India would have been the first nation after the P5 to demonstrate such an advanced design capability. Given India's economic situation at the time, it seemed prudent to restrict oneself to a design that was economically more sensible from a stockpile maintenance perspective. The OAR of the various delivery systems was unknown at the time, so it also made sense not to overburden the system.

Even at that time, a number of people argued against this, and the results of the test itself were questioned by certain people. All those technical doubts became enmeshed in the politics of institutions and personalities, which added a certain lurid aspect to it. I welcomed the scientific debate as it was educational, but the ego clashes were distasteful.

Today, while the leading lights of that group have passed on, the questions they raised linger in people's minds. The doubts were so potent that they almost derailed the vital Indo-US Civil Nuclear Agreement discussions.

This is all archival; I am speaking of a simpler time, when it was easier to understand what was necessary.

Today - not so much.  The writing is on the wall. It is best to acknowledge it as such.

As Chappandaz put it - "The storm is coming."

Wednesday, August 23, 2017

The threat posed by online Nazis

At any given point in time there are always segments of the population that harbor extremely regressive or destructive thoughts. Extremist groups typically form when sufficient numbers of these people come together and form a loose association. With time the association strengthens, and then, over the timescale of a few years, the group dissipates.

This dissipation is due to a variety of fissiparous tendencies that are typically at play in any social group: ego clashes, unsustainable patterns of funding or group activity, and so on. A solid core of ideas (relevant in a certain enduring social or economic context), a charismatic leader, or a steady stream of conflict capital can keep groups together for periods exceeding this natural timescale.

As a historical example, consider the case of Sikh extremist groups in Punjab. This is a heavily studied system, both in India and abroad. There were always Sikh groups that held extreme views. However, with the appearance of Jarnail Singh Bhindran, these groups coalesced into a loose alliance and eventually threatened public order in Punjab and the integrity of the Union of India. After the Indian Army's Op Bluestar killed J S Bhindran, many believed the worst was over. However, public sympathy, the HR violations during the Army-led Op Woodrose, and sustained support to the extremists from the Pakistani ISI kept the movement going for ten more years. During the entire period of the "Khalistan Insurgency", the extremist groups presented numerous fissiparous tendencies: groups routinely formed, broke up, changed loyalties, ratted each other out, and even fought and killed each other. However, the central dynamic of the insurgency proved to be an alignment of these groups that endured far longer than anyone would have anticipated from historical data. In most Indian literature on this movement, this endurance is attributed to a particular cell of the ISI playing a major role in structuring funding and armament supply channels to these groups. In non-Indian literature, this endurance is attributed to the groundswell of public anger caused by HR violations by Indian security forces. I feel both factors were at play.

Fast forward to today: in the age of the internet, there are numerous opportunities for disparate extremist groups to engage in communication and common agenda building. The stable of hyper-empowered individuals and influencers is quite large (as one can easily draw on a globalized base), and there is an immense opportunity to use AIs to fluff up the support base of a movement. Given the way online advertising works, it is very easy to raise money for sustained online operations. We find (analogously to the GoI of the 1980s) that groups are interacting in ways that cannot really be controlled or precisely anticipated.

So it is no surprise that we find Nazi groups abundant on the internet. As I indicated earlier, Nazis are an extremely dangerous entity - capable of masking their genocidal agendas in a variety of wrappers. Once they get someone to participate in their agenda, they own that person - even if the person doesn't want to be owned. Dissociating oneself from the Nazi agenda can be extremely difficult; just ask the Germans how hard that can be.

In the US today we are seeing disparate groups - anti-govt militias, Sov-Cits, "White Nationalists", Oath Keepers, 3 Percenters, toxic Christian groups, Men-who-hate-Women, etc... - all gathering under the umbrella of the Alt-Right. We are also able to discern significant participation and engagement in these groups by RIS fronts (though most of this seems to be B Team/GRU led - if there is an A Team/Xdir involvement, it is not obvious at this time). There is also support from GOP and Trump organization types, both of which appear convinced that this coalition of groups is somehow key to their personal political survival. (Again, parallels to the Punjab situation abound!)

While it is important to remember that there are differences between the various groups in the coalition, it is IMHO equally important to focus on the one thing that brings them together - contact over the internet. Without this contact - which permits the sharing of schedules, money and agendas (the threads that bind) - there would be far less unity in this group. Cutting this form of contact down is impossible - you can shut down websites or Twitter accounts, but as IS demonstrates each day, there will always be ways to dodge any barriers you put up.

It should be relatively easy to track the schedule alignments by simply following the public calls for rallies and the responses from various group members.

It is much more difficult to find shared sources of funding, as most of these will have layers of security built into them. If the channels lead back to RIS, then one will not find them without serious effort.

It is moderately difficult to identify the threads that bind. As far as one can see, based on random sampling, four strands bind these groups quite strongly:

1) shared anti-Muslim sentiments
2) shared anti-Media sentiments
3) shared views on Immigration
4) somewhat shared anti-Jewish sentiments (toxic Christianity)

It would be nice to see some statistical evaluation of these "ties that bind" - to rank order them by strength and to determine who exactly controls the definition of these agendas. Usually an agenda is shared, but whoever speaks first is the leader. If one finds that Nazis are defining the agenda, then their influence is far more pervasive than even I can imagine right now. There are a number of subject matter experts who have strong views on where exactly the balance of power between these groups lies. I respect their scholarship, but the opinions of a few PhDs will not convince anyone, especially when there is significant political pressure to reject those opinions. Only rigorous data-based analysis will sway minds.

National security strategists in the Govt of India in the early 80s failed to grasp that merely focusing on the microscopic fissiparous tendencies of the Khalistani movements would not protect India from a deadly convergence of their agendas. An entirely separate set of tools was needed to cope with the factors that brought these groups together, and a completely separate set of tools was needed to stop infiltration of these groups into the law enforcement community.

The first set of tools yielded moderate success about 5 years after the onset of the insurgency. The Government of India was able to dominate the situation at great cost, and what success was achieved owed a great deal to the work of people like Dir (R&AW) A K Verma, who eventually brought Gen. Hamid Gul and the ISI to the negotiating table.

The second set of tools only achieved the desired results after about 10 years. Even now, a low level but detectable presence of Khalistani groups remains (so nothing has been completely eradicated). Most of this portion of the story was driven by aggressive police officers like DGP K P S Gill.

The timescale for the resolution of the conflict was set by how long it took for the Govt. of India to recognize the problem at hand and apply the right tools.

It's complicated.