Friday, September 22, 2017

BTC in the Bull Trap

It appears BTC prices are headed for the "Bull Trap". This is a technical term used by people who study speculation or hype cycles [see this article]. A bubble phenomenon (in this case BTC valuation) has a well-known shape.

If you pull up the BTC to USD valuation, you will see a similar shape appear every few years.

The trigger for speculative bubbles in the BTC market appears to be FBI seizures of BTC from places like Silk Road or Hansa/AlphaBay. These are illegal markets in proscribed materials which hold large quantities of BTC in escrow. The bulk of the high value transactions on these markets are narcotics related. There is at this time a sizable cash flow from the US to China (mainly for purchasing fentanyl), but every major narcotics player uses these markets.

When the FBI "takes down" a well known marketplace, it does so (IMHO) because the volume of trade there in illegal narcotics becomes sufficiently high that it starts affecting the traditional price control mechanisms that the DEA and USIC put into place. I think it's all part of a larger plan, but obviously I was never read into what that plan is.

Each "take down" is accompanied by seizure of several hundred thousand BTC. The BTC are usually returned to the owners (if they are not involved in illegal activities) or simply auctioned after a few months. In the immediate term however a major take down produces a sharp rise in demand for BTC, and that drives the price up. In this space speculators jump in and drive the hype cycle/bubble.

At the peaks of the hype cycle, the BTC exchanges come under pressure. The demand is so high that counterfeiting grows, and the exchanges are supposed to stop that. If they don't, they fall like Mt. Gox, which creates an added level of scarcity and feeds the bubble.

I feel the recent Chinese move against BTC is driven by concerns about the security of the exchanges themselves. I suspect that the DPRK guys have been gaining off the BTC market in ways that the Chinese don't like very much (and the Chinese don't like anything they can't control anyway - really who does?).

As the exchanges shut down, the transactions displace to other places which can still host them; this loss of trade momentum can cause the price to drop. That price drop is where the Bulls get stuck. It is the panic among the Bulls that really fuels the price plunge. As the exchanges shift and new channels emerge, the trade momentum resumes and we head into the "normal". Beyond that lies the much more complex rationalization process.

We must recognize that just as US or Chinese interventions can cause changes in the price of BTC, other states can affect it too.

As things stand, unless the RU can get money back out of Wall Street, the RU banking sector will collapse. There is a partial collapse already on the private side, but the contagion of bad debt will spread to the government side too. RU will not want the price of BTC to fall until they have extracted whatever price they want for their BTC.

DPRK also needs to carry out BTC based transactions. This is the fastest way to get what it wants without having to pay in hard cash. Thanks to BTC, there is no need to send ships filled with Burmese heroin to Iran and then get "SuperBills" back from there and then use the "SuperBills" along the I-90 corridor in the US ... etc etc etc... all that pain and suffering goes away with BTC. If KJU is going to keep conducting tests, he needs less cumbersome payment options for all the stuff he needs to keep going.

With lots of pressure on BTC from RU and DPRK, I think (despite US and CN moves) we are likely to see new exchanges emerge. Either the PRC government will cave to pressure from the fentanyl sellers or from its cryptocurrency miners (which, I will bet you, are linked to Chinese intelligence operators), or desperate parts of the RU private banking system will create new exchanges that pull the volume off the Chinese exchanges.

This is IMHO far far far far from over.

DPRK is planning a Juche Bird to remove all doubts about its capability. I think the impact of such a statement by KJU is profound and it will amplify pressures to reignite the BTC valuation.

The battle for control of the BTC price is actually the battle for control over the global economy.

Monday, September 18, 2017

Inertial Measurement Units (IMUs)- Some key issues - IV

The future is hard to predict and the present can only be described partially because we are never fully informed about anything. With that caveat I can make the following guesses about where this is likely to proceed.

As seen in earlier posts, barring major advances in our understanding of gravity, defining the accuracy of an IMU will remain a very challenging affair. The precision of an IMU is a relatively simpler matter, and we are likely to see Atom Optics based gyros used to "clock" the performance of other systems. This kind of bootstrapping will create a deeper understanding of the nature of the error in other systems.

As Atom Optics related technologies become better engineered, we will see a gradual shift in the mission critical side of IMU applications. On the commercial side we will likely see a growth in MEMS based applications. It is not entirely unlikely that these two branches may come to leverage off each other.

I feel we are likely to see the following happen too:

1) Role of Sensor Fusion: Using sensors of different types to check on each other offers interesting avenues for reducing noise in measurements of gravity. Schemes involving magnetometers have already been demonstrated, but a number of other schemes may also be possible. Such schemes will improve the precision of any number of existing devices (a minimal fusion sketch appears after this list).

2) Clouds help reduce noise: In theory one could have the IMU transmit a signal to a cloud, process it on the cloud and then resend the filtered signal back to the guidance system. This would be too unwieldy to carry out in a military or strategic application, but it may be possible to use this approach with commercial devices.

3) Deep learning will help fish weak signals from noise: It may become possible to use a deep learning network to extract small signals out of the noisy data from a cheap IMU. In the event that a deep learning network is so trained, a version of this network could be deployed on an embedded system attached to the IMU. It is difficult to ascertain how "good" this could be in actual deployment, but the idea is plausible (a toy sketch of such a denoiser also appears below).
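To make point 1 a little more concrete, here is a minimal sketch (my own toy example, not any fielded scheme) of a complementary filter that blends a gyro's fast-but-drifting angle estimate with a slow-but-drift-free reference angle from a second sensor such as a magnetometer. All the noise figures below are made up for illustration:

```python
import numpy as np

def complementary_filter(gyro_rate, ref_angle, dt, alpha=0.98):
    """Blend a drifting gyro integration with a noisy but drift-free reference angle.
    gyro_rate: angular rate samples (rad/s); ref_angle: reference angle samples (rad);
    dt: sample period (s); alpha: how much to trust the gyro on short time scales."""
    angle = ref_angle[0]
    fused = []
    for rate, ref in zip(gyro_rate, ref_angle):
        # high-pass the gyro (good short term), low-pass the reference (good long term)
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * ref
        fused.append(angle)
    return np.array(fused)

# toy usage: a constant true angle, a biased (drifting) gyro, a noisy reference sensor
dt, n = 0.01, 5000
true_angle = 0.5
gyro = np.random.normal(0.0, 0.02, n) + 0.005      # rate noise plus a constant bias
ref = true_angle + np.random.normal(0.0, 0.05, n)  # noisy but unbiased reference
print(complementary_filter(gyro, ref, dt)[-1])     # settles near 0.5 despite the gyro bias
```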
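For point 3, the following is a purely hypothetical sketch (PyTorch, random stand-in data) of what a learned filter might look like. The genuinely hard part in practice is obtaining the "clean" reference signal used as training truth, which I gloss over entirely here:

```python
import torch
import torch.nn as nn

class IMUDenoiser(nn.Module):
    """Toy 1-D convolutional network that maps a noisy gyro window to a cleaned one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):              # x: (batch, 1, window_length)
        return self.net(x)

model = IMUDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# stand-in data: in reality these would be aligned (noisy, reference) windows
noisy = torch.randn(32, 1, 256)
clean = torch.zeros(32, 1, 256)

for _ in range(10):                    # trivially short training loop, illustration only
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
```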

When speaking about these issues in the context of an actual deployment (as opposed to a "hey check out my GitHub for my latest Python code" context) - we are looking at a lot of money and hours spent on developing high reliability code and hardware. Those problems easily add decades to the simplest thing.

N.B.  In order to keep this simple I have left out two other sources of trouble in an IMU - offset and latency. The discussion of these topics is complicated for non-specialists and getting into that will not add anything to what I am attempting to do here.

Inertial Measurement Units (IMUs)- Some key issues - III

We have come a long way since the first mechanical spinning flywheel gyros were made at the beginning of the 19th century. There are three novel techniques available for sensing now:

1) Coriolis effect based devices [a nice discussion here] - These devices are now very popular in commercial cell phones. The most popular versions use Micro Electro-Mechanical Systems (MEMS), and the rotation and acceleration are sensed as a change in the capacitance of a microfabricated circuit. The demands put on the fab side are significant, although things are getting cheaper as the scale of deployment grows. As the entire device is made in a semiconductor fab, the overhead associated with creating noise control electronics is reduced, and as the signal is basically electrical in nature, a number of existing designs for low noise signal amplification can be leveraged to improve performance. Though not terribly good in terms of precision, these devices are cheap enough to be deployed at scale. It is possible to remove noise by using a magnetometer or other sensors, but the noise is still pretty bad relative to existing peers. With commercial applications growing, a lot of people are working on ways to fuse the data from multiple sensors and use cloud based big data filtering tools to get intelligence from these devices, but that stuff is still IMHO in its infancy. It is fantastically easy to get hold of a piece of Python code that hacks into one of these and gets data out (a small example appears after this list). If you are looking for a place to start learning about these, I recommend playing with the Arduino backed versions. I did this with a high school student on the robotics team some years ago - and it was a fantastic pain in the rear but great as a learning tool.

2) Sagnac effect based devices [a good place to start] - These devices are popular on the aerospace side. This effect is used for Ring Laser Gyros and Fiber Optic Gyros (this effect is also used in Atom Optics based systems, but that is discussed as a separate topic). One would naively think these systems are the most robust form of sensing possible, but there are subtle issues that limit their capabilities and utility [see here]. The main limitation on these devices comes from the fact that to get a very high resolution, one needs a very large path length (a back-of-envelope calculation appears after this list). Such a path length can only be achieved by incurring penalties in weight and size. The manufacture of these devices is non-trivial and would require significant investment.

3) Atom Optics [See the paper by Mark Kasevich et al. in this link] - These devices were originally conceived as sensitive tests of gravitational physics in the atom optics boom years of the last century. These ideas languished in unwieldy room-sized setups in the basements of physics departments for several decades, but sustained investment from the NI-24 program by the Navy and a passing interest from the IC enabled the construction of very robust variants of these devices*. Though initially considered too fragile to be used in real world applications, the quality of engineering has steadily improved and I think we may see these on real world aerospace platforms. These devices hold the promise of a significant reduction in noise over current systems, and the general thought is that with some effort this will lead to a much better place in the long run. That said, the manufacture of these devices is non-trivial, and the demands made on associated instrumentation are significantly larger than for mechanical devices.
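To give a sense of how accessible the MEMS parts in item 1 are, here is a minimal sketch of reading raw acceleration from a common hobbyist chip - an InvenSense MPU-6050 on I2C bus 1 of a Raspberry Pi style board, using the smbus2 Python library. The register addresses come from the part's public datasheet, and the scaling assumes the default +/-2 g range:

```python
from smbus2 import SMBus

MPU_ADDR   = 0x68    # default I2C address of the MPU-6050
PWR_MGMT_1 = 0x6B    # power management register (the device boots asleep)
ACCEL_XOUT = 0x3B    # start of the six accelerometer data bytes

def read_word(bus, reg):
    """Read one signed 16-bit big-endian register pair."""
    hi, lo = bus.read_i2c_block_data(MPU_ADDR, reg, 2)
    value = (hi << 8) | lo
    return value - 65536 if value > 32767 else value

with SMBus(1) as bus:                                 # I2C bus 1 on most Raspberry Pi boards
    bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)      # wake the device up
    ax = read_word(bus, ACCEL_XOUT)     / 16384.0     # +/-2 g range -> 16384 LSB per g
    ay = read_word(bus, ACCEL_XOUT + 2) / 16384.0
    az = read_word(bus, ACCEL_XOUT + 4) / 16384.0
    print(ax, ay, az)                                 # roughly (0, 0, 1) when sitting flat
```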
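To make the path length point in item 2 concrete, here is a quick back-of-envelope using the standard Sagnac formula - phase shift = 8*pi*A*Omega / (lambda*c) for enclosed area A and rotation rate Omega (a fiber gyro multiplies this by the number of turns, which is exactly where the weight and size penalty comes from). The numbers are illustrative only:

```python
import numpy as np

c, lam = 3.0e8, 633e-9           # speed of light (m/s) and a HeNe-style wavelength (m)
earth_rate = 7.292e-5            # earth's rotation rate, rad/s

def sagnac_phase(area, omega, wavelength=lam):
    """Sagnac phase shift (rad) for enclosed area 'area' (m^2) at rotation rate omega (rad/s)."""
    return 8.0 * np.pi * area * omega / (wavelength * c)

# phase produced by earth rotation alone for a 10 cm square loop vs a 1 m square loop
for side in (0.1, 1.0):
    print(side, sagnac_phase(side ** 2, earth_rate))

# enclosed area needed to turn a 1 microradian phase resolution into a given rate resolution
phase_min = 1e-6
for omega_min in (1e-4, 1e-6, 1e-8):
    print(omega_min, phase_min * lam * c / (8.0 * np.pi * omega_min))
```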

If I were to rate these platforms qualitatively in order of error (given that hard metrics on this are difficult to come by in context) - I would say that Coriolis systems have the worst noise issues, followed by RLGs and Fiber Optic devices. The Atom Optics systems have the best noise characteristics cited in public sources. I would not take any of these numbers too literally as they are not available for the kinds of application that have spiked public interest; those numbers are a closely guarded secret for obvious reasons.

As a rule of thumb, if you have a lot of noise on a sensor - you have a large overhead in terms of associated algorithms (and related electronics and software) needed to clean up that mess. IMHO this really limits the ability to use commercial/off-the-shelf stuff in strategic or mission critical applications.

In the next post I will make a few remarks in passing about the way things might change in the future.  (cont'd in next post)

* One of the prime drivers of this effort was the retirement of highly qualified technicians who could make mechanical gyroscopes and gradiometers in the US. Faced with a forced technological regression, the S&T guys in the USG gravitated towards the only hope they had at the time for rapid advances. Hence the interest in Atom Optics, which had emerged as one of the major candidates for other high impact technologies like Quantum Computation.

Inertial Measurement Units (IMUs)- Some key issues - II

As I indicated in the previous post, the ability to make a "good" IMU is challenged by two basic issues:

1) The ability to machine precise parts - such as perfect spheres.
2) The ability to correctly model the behavior of gravity along the IMU's trajectory.

The absence of perfect machining creates avenues to add error to the measurement of gravity. This affects the precision of the IMU.

The inability to properly model the local behavior of gravity leads to misinformed notions of accuracy.

There can be a "sweet spot" where acceptable levels of imprecision and inaccuracy coexist in harmony. Under such circumstances, it may be possible to make an IMU that is "good enough" for a particular role. Typically short range ballistic missiles can get away with having "crappier" IMUs simply because they aren't going very far or very high or very fast. However as you get up in speed and altitude, IMUs become quite critical to success.

I suspect the North Koreans are in such a "sweet spot", but I fear they will not be able to stay there very long as their ambitions grow with each passing day.

Here are some ways to manage the error in the measurements:

1) Comparing measurements on two or more IMUs - If we mount two IMUs on our rocket, then we can examine how they differ in the estimates of the height they report. If one IMU is much more sensitive than the other (i.e. able to see differences in height of centimeters as opposed to meters), we could check whether it reports a change of 100 cm when the coarse IMU reports a change of 1 m. This kind of thing is pretty common in other measurements. In the published literature you hear words like "Allan Variance" [see this link for more]; this refers to a way of comparing the performance of two sensors and getting some meaningful measurements of the ARW and drift (a small sketch of the calculation appears after this list). In practice, placing multiple IMUs (especially ones with combinations of fine and coarse measurements) on operational platforms is a major manufacturing burden.

2) Error modeling - Once measured, there are ways in which one can model the ARW and drift errors in our IMU. Error analysis tools have evolved significantly over the decades. Some really amazing stuff is now available. Most of the methods are some variation of "quaternion based filtering". "Quaternions" are a very mathematically compact way of representing the information typically obtained from an IMU (a toy propagation sketch also appears after this list). "Filtering" because you are removing noise from the IMU data. Here the community of signal analysts broadly splits into two groups - the DSP guys (who use deep understanding based ideas like Kalman filtering) and the Deep Learning guys (who use techniques like neural networks). It is not clear if either approach gives a clear advantage in terms of accuracy; however, both approaches require concurrent development of embedded computation systems. That adds large overheads to the manufacturing burden associated with IMUs. You are basically adding a dedicated fab line, firmware development and software validation & testing to the program cost here.

3) RF ranging and other external referencing - You can always use a simple RF signal to correct the accumulation of errors in the IMU. However, for extremely long range trajectories, the RF signals run out of line of sight with your rocket. So you have to either use a satellite or do something quite complicated to "get your bearings". If you decide to use a satellite RF beacon, you need to build a really good way of keeping that satellite in a particular spot in space, otherwise you can't range off it in any error free way. That part can get really entertaining given all the weird drag effects you have in earth orbit and those gravitational effects I alluded to earlier. Also you are now adding the cost of a satellite beacon program to the cost of your rocket guidance program. This quickly devolves into a number of chicken-and-egg questions. A highly unpleasant situation, but sometimes a way can be found. One of my favorite ideas in this context is Stellar Navigation. A combined "Astro Inertial Navigation" system was used on the SR-71, and a stellar alignment system was used on Gravity Probe B. These are relatively simple to implement and very robust. The unfortunate side effect of external referencing is that it can be interfered with, and that makes it less suitable for nuclear deterrence missions.
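To make the "Allan Variance" language in point 1 concrete, here is a minimal sketch (simulated data, textbook overlapping Allan deviation formula) of how one might estimate it from a log of gyro rate samples. The slope at short averaging times is what people quote as ARW, while the behavior at long times reflects drift:

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Overlapping Allan deviation of a rate signal.
    rate: 1-D array of gyro samples (e.g. deg/s); fs: sample rate (Hz); taus: averaging times (s)."""
    theta = np.cumsum(rate) / fs              # integrate rate into angle
    n = theta.size
    out = []
    for tau in taus:
        m = int(round(tau * fs))              # samples per averaging interval
        if m < 1 or 2 * m >= n:
            out.append(np.nan)
            continue
        d = theta[2 * m:] - 2.0 * theta[m:n - m] + theta[:n - 2 * m]
        out.append(np.sqrt(np.sum(d ** 2) / (2.0 * (n - 2 * m) * tau ** 2)))
    return np.array(out)

# toy data: white rate noise (gives the ARW slope) plus a slow drift in the bias
fs, n = 100.0, 200_000
rate = np.random.normal(0.0, 0.05, n) + 1e-6 * np.arange(n) / fs
taus = np.logspace(-1, 2, 20)
print(np.column_stack([taus, allan_deviation(rate, fs, taus)]))
```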
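And to show what the quaternion bookkeeping in point 2 actually involves, here is a toy sketch (plain Euler integration, no filtering at all) of propagating an attitude quaternion from body rates:

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of two quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def propagate(q, omega, dt):
    """One Euler step of q_dot = 0.5 * q * [0, omega]; omega is the body rate in rad/s."""
    q_dot = 0.5 * quat_multiply(q, np.concatenate(([0.0], omega)))
    q = q + q_dot * dt
    return q / np.linalg.norm(q)              # renormalise so it stays a rotation

# toy usage: spin at 0.1 rad/s about z for 10 s starting from the identity attitude
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = propagate(q, np.array([0.0, 0.0, 0.1]), 0.01)
print(q)   # close to a 1 rad rotation about z: [cos(0.5), 0, 0, sin(0.5)]
```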

In my next post I will discuss some novel gravity sensing systems that are finding application in commercial IMUs and how things might play out for them in the future.

(cont'd in next post).

Inertial Measurement Units (IMUs)- Some key issues - I

The proliferation of Inertial Measurement Units (IMUs) has rightly caused people to become concerned about the likelihood of their misuse by rogue states. There are however physical constraints that limit certain kinds of misuse. I discuss some of the key limitations below.  A good reference to have handy for this is "Inventing Accuracy". If you have problems following what I am saying, please reply to this post in the comments below and I will get back to you asap.

For the purposes of this discussion, let us consider a simplified IMU which consists of a gyroscope and a gradiometer. The gyroscope ensures that the gradiometer is aligned with the vertical direction. In our simple model, the gyroscope is a mechanical device - a spinning wheel (the kind you might find in an undergrad physics lab) - and the gradiometer is a simple spring which is compressed/stretched by a test mass attached to it. Also let us assume that our IMU is non-ideal in predictable ways and that our IMU is attached to a rocket that behaves in a totally predictable way (these are both over simplifications that do not hold IRL).

In the ideal case, our gyroscope is spun up to a certain angular velocity about its vertical axis, and since the entire assembly sits on a gimbal mount, it holds the spring and test weight of the gradiometer perfectly vertical. The test mass experiences a gravitational field that pulls it downwards, and this causes the spring in the gradiometer to extend. If we apply an acceleration to the IMU (as we might if we were to light the rocket engine under it), we see the extension change as the added acceleration also pulls on the test mass.

In the ideal world, our IMU works perfectly: as the rocket engine lights up, we see the added acceleration add to gravity and the extension increases. As the rocket rises into space, the acceleration due to gravity reduces. A computer attached to the IMU records the change in the extension with time, and when the change in extension reaches a particular amount, the computer attributes this change to the rocket reaching a particular height above ground and shuts off the rocket engine. Everyone is happy.
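As a toy illustration of this ideal picture (the numbers are arbitrary and the model deliberately ignores everything the next paragraphs add), the spring extension is just x = m(g(h) + a)/k, with g falling off as the inverse square of distance from the earth's centre:

```python
G_M = 3.986e14     # earth's gravitational parameter, m^3/s^2
R_E = 6.371e6      # earth radius, m
m   = 0.1          # test mass, kg (arbitrary)
k   = 50.0         # spring constant, N/m (arbitrary)

def extension(height, accel):
    """Spring extension (m) of the ideal gradiometer at a given height and applied acceleration."""
    g = G_M / (R_E + height) ** 2
    return m * (g + accel) / k

print(extension(0.0, 0.0))        # ~0.0196 m: gravity alone, sitting on the pad
print(extension(100e3, 30.0))     # under thrust at 100 km; gravity is already ~3% weaker
```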

That's not the way it works IRL.

Firstly, our gyro experiences friction on its bearings. This leads to a torque that changes its angular momentum. The decline in angular momentum presents in two ways - firstly as a set of random angular deceleration events that cause the angle of the gyro to rattle around (this is called Angular Random Walk or ARW), and secondly as a slow reduction in its angular velocity that causes the angle of the gyro to shift in one direction (this is called "drift"). As the gradiometer is attached to the gyro, shifts in the gyro angles propagate to the measurements of acceleration. The exact model of propagation is quite nontrivial, but in this way the gradiometer picks up an ARW and Drift of its own.
Errors in the gradiometer reading (i.e. extension) translate into errors in the estimation of the height of the rocket above ground. A large error could significantly alter the trajectory of the rocket.
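A crude simulation (with made-up noise magnitudes) shows how these two error terms behave differently: the random-walk part of the angle error grows roughly as the square root of time, while the drift part grows linearly and eventually dominates:

```python
import numpy as np

dt, n = 0.01, 100_000                          # 1000 s of data at 100 Hz
t = np.arange(n) * dt

# the gyro should report zero rate; instead it reports noise plus a constant bias
arw_noise  = np.random.normal(0.0, 1e-4, n)    # random rate errors -> random walk in angle
drift_rate = 2e-6                              # constant rate bias -> drift in angle

angle_error = np.cumsum(arw_noise + drift_rate) * dt   # integrate rate error into angle error

print(angle_error[-1])                                  # mostly the drift term by t = 1000 s
print(drift_rate * t[-1], 1e-4 * np.sqrt(dt * t[-1]))   # drift part vs one-sigma random-walk part
```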

A mechanical gyro and gradiometer may sound very low tech - they are based on technologies that are over a hundred years old - but they are extremely reliable. If you can machine perfect spheres (turns out that is a lot harder than one might think it is), you can make very high precision and high "accuracy" IMUs. I use "accuracy" in quotes because it turns out that it is quite difficult to define the term in this peculiar context.

As we go up and out from earth, we experience gravitational contributions from poorly characterized terrestrial sources (such as the non-spherical nature of the earth) and extraterrestrial sources (the moon, nearby asteroids, tidal effects, etc.). These effects make it hard to claim deep knowledge of the gravitational acceleration at various altitudes. This makes it difficult to define "accuracy" in the context of a gradiometer.

(cont'd in next post).


Thursday, September 14, 2017

Op MEDEA or why I watch documentaries!

I watched a very nice documentary last night. And it took my breath away.

For a long time now I have been looking at the scientific side of the Global Climate Change awareness campaign and I have wondered how so many senior people were stating things as facts. I was always astounded by the scale of the data the awareness campaign was bringing out into the public domain.

I was not alone in this, many other physicists had similar reservations about the awareness material. Most of us would say things like "it is an interesting model" or "would be nice to see the raw data" or "I wonder what couplings were used to model so and so effects".

A few examples of this


  1. An awareness campaign was launched to show how the Arctic ice cap has changed over the last century. When I saw this I was stunned. I kept asking myself - "Wow!! How long have they been collecting this data?"
  2. There was a set of discussions about deep ocean currents, which depend sensitively on temperature profiles inside the ocean. I saw those discussions and asked myself - "Gee, it would be horribly expensive to take those measurements. Has NOAA or some planetary science lab been measuring that stuff for decades now?"
  3. There was a YouTube video last year which spoke about patterns of high altitude jet streams and how those were changing - specifically how a southern hemisphere jet stream was crossing the equator, which it was claimed was happening for the first time ever. And again I found myself asking - "How on earth do you know it has never happened before?"

Turns out "they" knew and "they" told the scientists.

Specifically - in the late 80s the CIA opened up its massive vault of MASINT and SATINT to 70 odd high ranking scientists. The effort was pushed by then-Sen. Al Gore, who felt that these measurements held the key to finding signatures of global climate change.

The CIA had been repeatedly photographing the polar icecap, the Navy had been measuring the thermocline and deep ocean currents and had hydrophones to detect sounds in the ocean, the Air Force had data on high altitude winds, the seismology people had geophones to catch nuclear tests, and so on... together they were able to put a database of unimaginable size before scientists who had never had access to such information.

The result was the first ever model of Global Climate Change. It appears that the scientists were able to create a way of capturing icecap melting, precipitation shifts, extreme weather events, etc... This was way back in the early 90s.

When Al Gore became VP, the scientists asked if he could open a door to the Russian IC and see if they were willing to be a part of this program to understand global climate change. Then-DCI Robert Gates supported the venture. The result was a one of a kind intelligence collaboration between the CIA's DST and the GRU's physical intelligence branch.

Russian and American scientists worked together and an incredibly coherent picture of climate change effects was built. The model predicted among other things - a steady rise in flooding due to high water content cyclonic events, droughts and ensuing migrations in Africa, shifts in the weather patterns and "freak" events.

The exact model was far from settled, but there was a lot of constructive debate and nations worked together. The data was secret but the analysis was public. Most ICs (like India's) knew what was going on, and the analysis was a major influence on national policymaking.

When Bush Jr came into power, the toxic masculinity of the GOP took over - it was all about "drill baby drill" - and a brutal decade of pointless wars followed. Putin seized power in Moscow, and he was also in the pocket of RU's vast ONG lobby, so MEDEA was shut down.

After Obama came to power, he reinstated that effort and asked it to deliver a list of specific national security risks from Global Climate Change. This created a fountainhead of information - a kind of socket that the national security policy machinery is permanently plugged into. As the Nat-Sec paper mill cranks out position papers (that vital commodity on which all real decisions are made), it uses information from this MEDEA inspired factual database.

In 2015 the group disbanded as its work was done.

The movie has taught me two things:

1) Always trust my instincts - if something looks odd or unsupported - something is actually amiss and

2) There is a LOT MORE HARD DATA supporting climate change than even I had believed. Also the data has been COLLECTED BY COMPLETELY NEUTRAL and UNCONNECTED IC OBSERVERS in at least TWO SEPARATE COUNTRIES over SEVERAL DECADES before Climate Change became a fashionable media topic.  

That latter part is HUGE. WAY WAY BIGGER than what Trump thinks is the size of his "Hands".*

If you can - watch the documentary - it is worth the time.

* Given how intimately the Nat Sec paper mill and the MEDEA data sockets are connected, I suspect Trump and the GOP will not be able to use the position paper mechanism at all. This will gravely impair their ability to make sound national security policies (but that is a discussion for another post).

Tuesday, September 12, 2017

Was DPRK 6 a two-stage TN device?

As I indicated earlier, I believe that for policy purposes one should treat DPRK's thermonuclear ambitions as a reality, but at the technical level it is vital to go on asking questions.

At the present time, we only have seismic signatures of the DPRK 6 event. The RC data from air sampling has not been made public. Given the proliferation sensitivity of the isotope ratios, people generally don't make that information public.

So quite understandably there is a lot of back and forth about OS information that would conclusively point to DPRK 6 being a two stage TN device.

A lot of people feel that DPRK could not have developed a device of this complexity so quickly. That sounds reasonable until you see that the DPRK timescale is quite comparable to others who pursued a very accelerated and aggressive development cycle.

Rough estimates of DPRK U-235 and Pu-239 stockpiles are available. There is a section in there about how DPRK might produce Tritium, but there are no numbers that we can take from that.  There is also a suggestion that irradiation of Lithium targets is possible in DPRK reactors, so the only thing that could limit DPRK's ability to make LiD (the fuel used in modern two stage TN devices) is the availability of Deuterium.

There does not appear to be any public discussion on where DPRK might get its hands on Deuterium. From public records DPRK does not appear to have any heavy water moderated reactors. There do not appear to be plants inside DPRK that produce heavy water. There is no public evidence of DPRK importing heavy water from any known sources.

If some public domain information should emerge that DPRK was able to successfully source D2O (Heavy Water) from somewhere outside the country - then one might be able to argue that there is significant public domain evidence that supports the notion that DPRK 6 was a two stage device.

Monday, September 11, 2017

The Dance of Mohini

I have long maintained that most of the ancient Hindu texts are meta-narratives that discuss common themes in proliferation and counter proliferation. Whether historically accurate or mere works of imagination, they contain ideas that are applicable in a variety of situations.

One particular story arc that sticks in my mind is the "Dance of Mohini". The original story has been discussed elsewhere in greater detail by numerous experts. For our purposes, I think we can reduce it to the following - the adversary is seduced into using their most potent weapon against themselves.

The attractive aspect of the "Dance of Mohini" is that it uses a subterfuge - a gesture of peace - as opposed to the traditional escalation framework. The subterfuge lulls the adversary into a false sense of security and that decline in security consciousness is used to bring the enemy to death's door.

In the story itself, the Demon King Bhasmasur is seduced into thinking that he is merely dancing with a beautiful woman who will soon be his bride. His longing and lust for the woman allow him to forget that his right hand possesses the power of death. In the dance, the beautiful woman - Mohini - raises her right hand over her head, and King Bhasmasur does the same, turning himself into ashes.

Those of you who have followed discussions on the disreputable forum will recognize that this idea was proposed almost 15 years earlier in a different context. I can't say for sure if anyone listened to me back then, but the idea has a certain je ne sais quoi about it.

Friday, September 08, 2017

Some observations on the aftermath of DPRK 6

I am not a Korea expert, so please consider this as just the comments of an outside observer.

1) KJU has released photos of what appears to be a mock up of a two stage nuclear weapon. The mock up is most likely heavily influenced by open source information available on the W88 warhead. The current consensus on seismic signatures of DPRK 6 is that the device likely achieved 160 kT. Per KPA sources, the rough estimates of CEP of DPRK missiles are in the few miles range, and they seem to think that 1 in 4 missiles will make it through the BMD screen.

2) Against that backdrop it is reasonable to conclude that KJU (and DPRK) is pursuing a path that will lead them to acquiring two stage (high fusion yield) devices. This path appears to be motivated on the technical side by concerns about poor CEP and  low OAR of their long range delivery platforms.

3) It appears that at some point in the past Pyongyang expressed a desire for a deal (mimicking the outline of the IUCNA). It is difficult to know whether this is just a "blow-hot-blow-cold" ruse designed to deflect attention from KJU's true intentions on the matter or if this is a genuine interest among some group of people inside DPRK for a normalization of US-DPRK relations.

4) It is also difficult to gauge the extent to which KJU's behavior is driven by internal threats to his power. As KJU has not earned his place at the top of the DPRK military force, he nominally holds the rank of "Wonsu" but does not hold the rank of "Dae Wonsu" that his father and grandfather held. While there is little doubt that the DPRK Armed Forces will follow his instructions, they may not do so with the same enthusiasm with which they followed his grandfather or father. This IMHO reflects a distance between KJU's actual stature and his desired stature that will likely cause him to feel perpetually insecure.

5) Given that backdrop, if KJU were to weaponize his nuclear devices, several uncomfortable questions arise about authorized use or the likelihood of these weapons falling into the wrong hands. This automatically opens a discussion on Permissive Action Links and related safety systems.

6) Given that there is a large trust deficit between DPRK and the US, and that handing an issue of this sensitivity to Donald Trump is discomforting, I wonder if there is something to be gained by conducting a simulation of a negotiation between KJU and the US. I imagine there are enough subject matter experts in the US, Japan and South Korea that one could simulate a decent KJU!

I am tempted to think that perhaps if one engages KJU in productive dialogue, then one might be able to probe the issue of PALS with him. In the event that the desire for normalization of relations with the US is genuine, he may agree to locating US supplied PALS on his warheads.

While there is a significant gap between KJU expressing an interest in US manufactured PALS on his weapons and actually having US teams on the ground that put these PALS on to his weapons, this could serve as an important trust building step.

Tuesday, September 05, 2017

DPRK 6 has most likely crossed the design capabilities of S2

Based on seismic estimates it appears that DPRK 6 has crossed the maximum stated capability of the S2 boosted fission design. Current estimates from seismology rest at about 100-300 kT. There is some discussion about the contribution of the related tectonic event. One estimate puts that at around 34%. This would mean some 66% of the energy came from the device.

At the low end the estimates are grazing the highest values for S2 *test*. At the high end they are exceeding the proposed capabilities of the S2 *design*.

Unlike Pokhran, the Punggye-ri site is a granite mountain, so there are no pressing concerns on environmental issues. There does not appear to be a regional framework to deter DPRK from further testing. Donald Trump doesn't have what it takes to shut this down. Per ROK intel, preparations for DPRK 7 are complete. If that implies that a seventh device has already been placed in the hole, then DPRK may be very far along the path to advanced design capabilities. There are NO real caps on this.

In a broader sense, the rapid march of DPRK from demonstrating basic fission capabilities to boosting and possibly even some modest fusion yields is a very peculiar and alarming case of vertical proliferation. The photos Kim Jong Un lets us see of his "warhead capable" physics packages clearly point to a great deal of open source study by DPRK experts. While these photos are unlikely to show actual designs used by DPRK in its weapons, they point to a very large number of design studies that have been conducted on the matter. This is not unusual, but the speed at which they are doing it is quite unsettling.

When the decision was taken to demonstrate a boosted design in 1998, the world was a very different place. India would have been the first nation after the P5 to demonstrate such an advanced design capability. Given India's economic situation at the time, it seemed prudent to restrict oneself to a design that was more economical from a stockpile maintenance perspective. The OAR of various delivery systems was unknown at the time, so it also made sense not to overburden the system.

Even at that time, a number of people had argued against this. The results of the test itself were questioned by certain people. All those technical doubts became enmeshed in the politics of institutions and personalities and that added a certain lurid aspect to it. I welcomed the scientific debate as it was educational, but the ego clashes were distasteful.

Today, while the leading lights of that group have passed on, the questions they raised linger in the minds of people. The doubts were so potent that they almost derailed the vital Indo-US Civil Nuclear Agreement discussions.

This is all archival, I am speaking of a simpler time when it was easier to understand what was necessary.

Today - not so much.  The writing is on the wall. It is best to acknowledge it as such.

 As Chappandaz put it - "The storm is coming."