Sunday, June 12, 2016

Can people be prevented from simulating TN designs?

Previously [1][2][3] we reviewed why it is necessary to focus on simulation capabilities when thinking about the security of TN-weapon-critical information. Now let us see what (if anything) can be done to limit access to such simulation data.

In this post, we explore two scenarios:

a) The Hacked Cloud: A scenario in which an existing cloud computation platform is hacked and simulation data or capability is gained by the hacker/s.

b) The Ground-Up Approach: A scenario in which a sufficiently large cloud is set up and a simulation of the physical transport equations is attempted by a group of people determined to secure knowledge of the coupling between the equations.

Scenario A: The Hacked Cloud.

As I indicated earlier [2], if a cloud used by a large entity (state or corporate) for nuclear simulations were hacked, the hacker would gain access to enough information to design a nuclear weapon.

Unless the large entity deliberately wanted to proliferate the knowledge, it would undertake serious efforts to protect the data. The hacker/s would therefore have to first locate the cloud, then work their way through some very heavy defensive measures to get inside, and finally transmit the data out.

Let us assume for a moment that such tailored access is somehow possible but extremely resource intensive. Then the only barrier between the hacker/s and the design-critical information is knowing whether the cloud is capable of the desired computation, or whether it already holds the information they seek.

The simplest way the hacker/s might learn such things is through a compromised employee. If one of the people working on the cloud or on the simulation is susceptible, that person will be the weakest link in the system.

Another way hackers might learn the nature of computations on a cloud is by analyzing its pattern of data transmissions. A cloud dedicated to the simulations of interest would have a very peculiar transmission pattern: it would compute for a very long interval and then send a relatively compact packet of results back to whoever submitted the job. This pattern would look very different from that of a cloud processing something else (such as a real-time semantic analyzer for a consumer electronics application).
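To make this concrete, here is a minimal sketch of how one might flag that signature in a timestamped transfer log. All function names, thresholds, and data below are hypothetical, invented for the example:

```python
# Illustrative sketch only: flags the "long quiet compute, then one
# compact result upload" traffic pattern described above. Thresholds
# and data are invented for the example.

def looks_like_batch_simulation(events, quiet_hours=12.0, max_burst_mb=100.0):
    """events: time-sorted list of (timestamp_hours, transfer_size_mb).
    Returns True if the traffic resembles long compute intervals
    punctuated by small result uploads."""
    matches = 0
    for (t_prev, _), (t_cur, size) in zip(events, events[1:]):
        gap = t_cur - t_prev
        if gap >= quiet_hours and size <= max_burst_mb:
            matches += 1
    return matches >= 3  # require the pattern to repeat before flagging

# Example: three ~24-hour compute runs, each ending in a ~10 MB upload.
trace = [(0.0, 10.0), (24.0, 9.5), (48.0, 11.2), (72.0, 10.4)]
print(looks_like_batch_simulation(trace))  # True
```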

Yet another way a hacker can learn about the functioning of a cloud is by looking at the power consumed by the environmental systems attached to the data center. The energy load from the chiller systems is usually tied quite closely to the utilization of the cloud, so anyone running a large device simulation would produce a very peculiar fluctuation in the power drawn by the environmental system.
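The same logic applies to the chiller side channel: correlate the observed power trace against a guessed on/off compute schedule. A toy sketch, again with wholly invented numbers:

```python
import math

# Illustrative sketch: how strongly does an observed chiller-power trace
# track a hypothesized compute schedule? All data here are invented.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

chiller_kw = [310, 305, 620, 640, 635, 330, 615, 630, 320, 315]  # hourly load
guessed_on = [0,   0,   1,   1,   1,   0,   1,   1,   0,   0]    # 1 = run suspected
print(round(pearson(chiller_kw, guessed_on), 2))  # near 1.0 -> schedule fits
```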

It is plausible that the hacker/s could gather sufficient intelligence on the nature of the cloud and its computations, and then mount a detailed assault on it to secure the information they desire.

Securing any existing clouds against such attacks is critical to preventing this.

Scenario B: The Ground-Up Approach.

On the face of it, this approach may seem more resource intensive, but it has certain things going for it. In my opinion this is the scariest scenario of all.

Firstly, whoever sets up the cloud has complete control over how it is implemented. Such a captive platform is attractive because it can be deliberately tuned to perform the desired computations more efficiently.

Secondly, the cloud can be used without fear of interference. Apart from a relatively minor set of energy signatures (minimal if the cloud is co-located with a commercial facility), the cloud would be practically hidden.

Whoever uses this approach has to identify the exact set of computations to perform and the right way of performing them. As I discussed in earlier posts, the main aim of the computations is to determine the relationship between compression and the criticality of a nuclear reaction. One therefore has to define two sets of equations: a first set that captures the dependence of the reaction rate on density and temperature, and a second set that describes the flow phenomena and the buildup of pressure inside the reaction vessel. Some of the coefficients in these equations are published in the open literature, others can be inferred, and any couplings between the equations can be addressed either through repeated simulations or through a physical model of the coupling itself.
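To illustrate the generic structure only (this is a textbook-level schematic; the actual forms and coefficients relevant to a weapon are not in the open literature), such a system might be written as:

```latex
% Schematic structure only; all specifics are textbook-generic.
\begin{align}
  \frac{dn_i}{dt} &= -\, n_i\, n_j\, \langle \sigma v \rangle_{ij}(T)
    && \text{(density- and temperature-dependent reaction rate)} \\
  \partial_t \rho + \nabla\cdot(\rho \mathbf{u}) &= 0
    && \text{(conservation of mass)} \\
  \partial_t (\rho \mathbf{u}) + \nabla\cdot(\rho \mathbf{u}\otimes\mathbf{u}) + \nabla p &= 0
    && \text{(momentum)} \\
  \partial_t E + \nabla\cdot\big[(E + p)\,\mathbf{u}\big] &= Q(n_i, T)
    && \text{(energy, with a reaction source term)}
\end{align}
```

The coupling enters through the source term Q(n_i, T) and through the equation of state p = p(rho, T); pinning those two pieces down is precisely the hard part discussed below.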

Many codes are available to carry out density-dependent reaction-rate simulations. Working through a model with many reactions and differing cross sections on paper is very painful, but this sort of thing can be handled quite decently in a simulation.
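As a caricature of what such a code does, here is a toy forward-Euler integration of a single generic two-body reaction with an invented, Arrhenius-like rate coefficient; real codes track many reactions with tabulated, temperature-dependent cross sections:

```python
import math

def rate_coeff(T):
    # Hypothetical Arrhenius-like <sigma*v>(T); constants are invented.
    return 1e-3 * math.exp(-5.0 / T)

def burn(n1, n2, T, dt, steps):
    """Forward-Euler integration of dn1/dt = dn2/dt = -n1 * n2 * <sigma v>(T)."""
    for _ in range(steps):
        r = n1 * n2 * rate_coeff(T)
        n1 -= r * dt
        n2 -= r * dt
    return n1, n2

# Deplete two reactant densities over 1000 small time steps.
print(burn(1.0, 1.0, T=2.0, dt=0.01, steps=1000))
```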

A number of codes are also available for fluid dynamics modeling. As most of the reactions of interest happen on nanosecond or shorter timescales, the simulation framework has to account correctly for fluid flow on those timescales. Moreover, the fluid in the simulations of interest moves at high Mach number (one has to compare the speed of the fluid to the speed of sound in the same medium). Modeling fluids at extremely high Mach number is problematic because it has to account for turbulence, and the main difficulty there is that energy dissipation occurs in a fractal fashion across a variety of length scales. This kind of turbulence can break up a compression front and create mixing between the reactants. There are very few reliable models or codes for describing turbulence phenomena on nanosecond timescales.
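For a sense of what the non-turbulent baseline looks like, here is a bare-bones 1D compressible Euler solver (first-order Lax-Friedrichs) run on the standard Sod shock-tube benchmark. Everything hard in a real high-Mach code (higher-order fluxes, realistic equations of state, turbulence closures) is deliberately absent:

```python
import numpy as np

GAMMA = 1.4  # ideal-gas ratio of specific heats

def flux(U):
    """Euler fluxes for conserved variables U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def lax_friedrichs_step(U, dx, dt):
    F = flux(U)
    Up, Um = np.roll(U, -1, axis=1), np.roll(U, 1, axis=1)
    Fp, Fm = np.roll(F, -1, axis=1), np.roll(F, 1, axis=1)
    Unew = 0.5 * (Up + Um) - 0.5 * dt / dx * (Fp - Fm)
    Unew[:, 0], Unew[:, -1] = U[:, 0], U[:, -1]  # hold the ends fixed
    return Unew

# Sod initial condition: high-pressure gas on the left, low on the right.
N = 400
x = np.linspace(0.0, 1.0, N)
rho = np.where(x < 0.5, 1.0, 0.125)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.array([rho, np.zeros(N), p / (GAMMA - 1.0)])  # gas starts at rest

t, dx = 0.0, x[1] - x[0]
while t < 0.2:
    u = U[1] / U[0]
    c = np.sqrt(GAMMA * (GAMMA - 1.0) * (U[2] - 0.5 * U[0] * u**2) / U[0])
    dt = 0.4 * dx / np.max(np.abs(u) + c)  # CFL-limited time step
    U = lax_friedrichs_step(U, dx, dt)
    t += dt

print("density range after the shock has propagated:", U[0].min(), U[0].max())
```

Nothing in this toy survives contact with real turbulence; it is only the skeleton on which serious codes are built.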

The lack of such codes represents a major technical hurdle that the person/s using the "Ground-Up Approach" have to confront. If they choose to build their own code, they will need to validate it against other high-Mach simulation frameworks. An efficient way of validating one's code against others is to publish it and attempt several benchmark problems. There are a number of groups doing this for purely academic reasons, and it would be difficult to detect the people doing it for non-academic ones.

Assuming that a well-tested set of reaction-rate and fluid-mechanics codes is available, the next set of technical challenges comes from the coupling between the equations. There is a great deal of published literature on how to couple the Navier-Stokes equations with transport models such as the Poisson-Nernst-Planck equations used to describe high-energy-density plasma behavior, but the models in the public domain do not contain information about the couplings one might encounter in the context of a high-fusion-yield TN weapon. It is an open question whether the couplings can be approximated by some density-functional coupling approach, but whoever wants to do this kind of simulation will have to find a way of testing the couplings.
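One generic way the coupling question shows up numerically is operator splitting: alternate a flow step and a reaction step, and check that the answer stays put as the step size shrinks. A toy sketch with wholly invented update rules (standing in for real solvers):

```python
# Toy illustration of Strang operator splitting, one standard way of
# coupling two solvers. The update rules below are invented placeholders,
# not models of any real physical system.

def flow_step(state, dt):
    rho, T = state
    return (rho + dt * (1.0 - rho), T)  # placeholder "transport": relax density

def reaction_step(state, dt):
    rho, T = state
    rate = 0.5 * rho * rho                   # placeholder two-body reaction
    return (rho - dt * rate, T + dt * rate)  # deplete density, deposit heat

def strang_split(state, dt, steps):
    """Advance the coupled system: half flow, full reaction, half flow."""
    for _ in range(steps):
        state = flow_step(state, 0.5 * dt)
        state = reaction_step(state, dt)
        state = flow_step(state, 0.5 * dt)
    return state

# Halving dt should barely change the answer if the splitting error is
# under control; this is the cheap self-consistency check mentioned above.
print(strang_split((2.0, 1.0), dt=0.01, steps=100))
print(strang_split((2.0, 1.0), dt=0.005, steps=200))
```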

Currently only a few countries have the ability to test such simulations: there are only a few experimental setups where these couplings can be explored in a real-world event.

The unavailability of fluid dynamics codes for high-Mach-number flow simulation, and the difficulties associated with testing the couplings between the flow equations and the other transport equations (mass, energy, etc.) in the system, form a natural barrier to entry for any party seeking such knowledge. Unfortunately, any strategy that seeks to artificially raise this barrier would likely provoke such a party into making a maximal effort to breach it. As a professional physicist I cannot completely discount the probability that a sufficiently motivated group of individuals could find an alternative and innovative way to couple the underlying equations informatively. Once the couplings become known, there will be practically nothing standing between these individuals and a simulation of a TN design. There is a need to proceed with extreme caution on this front.

