Thursday, May 22, 2008

Technical comments relating to testing and simulation

I am attempting to parallel the discussion on the forum. I will add to this list as the discussion progresses.

I am reproducing my comment from a previous thread.

Usually in a high energy physics experiment, you have a beam and a target, and then you have an array of detectors situated around the event site. Whenever the beam is turned on - the accelerated particles in the beam strike the target and produce exotic conditions that do not exist in normal matter.

Such collisions create a spray of nuclear fragments which strike the detectors surrounding the target. In order to figure out the details of the collision - data collected from various sets of detectors has to be carefully reconciled and usually this requires a really complicated audit of each detector and its performance.

In any high energy physics experiment - there is an immensely involved theoretical framework to describe what sorts of fragments will be created in a collision. Usually theories of the physics underlying a collision are tested using computer simulations. People who do these simulations usually act and talk like they know what is going to happen when the beam collides with the target. However, at the end of the day - everything they say is just the result of a simulation - which could be wrong or right.

In practice - every simulation only very delicately mirrors the experimental conditions and it is a highly involved and time consuming task to reconcile the results of the simulation with experimental signatures obtained from the detector. Typically - this sort of work takes many, many years at places like CERN or Brookhaven or Fermilab etc... Often promising theories of what really happened are rejected by new suggestions that emerge in hindsight.

In each experiment - the physicists charged with reconciling the results of the experiment to the results of the simulation are hostage to the amount of time the beam is up and running. The longer the beam stays on - the more accurate the statistical sampling of the collision processes. Reconciling theory and experiment with low statistical data requires decades of experience in this kind of work and no amount of computer modelling is a substitute for experience in picking out good physics.
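The point about beam time can be made concrete with a minimal sketch. Assuming collision events of interest accumulate at a roughly constant rate (a Poisson process), the standard deviation on a count of N events is sqrt(N), so the fractional statistical uncertainty falls as 1/sqrt(N) - i.e. halving the error requires four times the beam time. The event rate used below is purely illustrative.

```python
import math

def relative_uncertainty(rate_per_hour, hours):
    """Fractional statistical error on a Poisson-distributed event count.

    For N observed events the standard deviation is sqrt(N),
    so the relative uncertainty is sqrt(N)/N = 1/sqrt(N).
    """
    n = rate_per_hour * hours
    return 1.0 / math.sqrt(n)

# Illustrative numbers only: 100 candidate events per hour of beam.
for hours in (1, 10, 100):
    err = relative_uncertainty(100, hours)
    print(f"{hours:5d} h of beam -> {100 * err:.1f}% statistical error")
```

This is why "more beam time" is the perennial demand: statistical precision improves only with the square root of the running time.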

In general, anyone can poke holes in the detection scheme, the simulation results, and the data analysis. That is easy to do - the hard part is to suggest viable alternatives to interpret the data or suggest a way to use the existing experimental setup to gain insight into the underlying physics. Usually making that kind of commentary requires a lot of experience and exposure to experimental physics and someone who does a PhD in theoretical physics does not necessarily *automatically* have the background needed for this kind of work. Please understand a kid in 1st standard will tell you - "you need to do this experiment again" or "you need a bigger accelerator" etc... but this does not constitute a scientific peer review of any kind.

Ultimately I feel a nuclear test is most analogous to an HEP accelerator experiment where the beam can only be turned on intermittently due to cost issues and where only a limited number of people have access to all the information needed to interpret the results fully. In any lab there are usually as many theories of the experiment as there are physicists willing to sit over a cup of coffee. This is the hallmark of a good lab - motivated people indulging in spirited discussion. It is all too easy for scientists in their excitement to demand more beam time and more detectors but ultimately the big boss of the lab has to reconcile the costs of this kind of stuff with the needs of the political system. That is the Boss's job description.

*** Start of comments on present discussion ***

1) Taking soil samples from the area does not imply "doubting the results of the test".

Typically you maximise the amount of information that you can collect from a test event, and the soil samples represent an indirect gauge of what happened in the event. You combine information from various measurements to draw a consistent picture of the physics of the event. This is not unusual - this is in fact extremely sensible experimental practice. As to whether this is or is not done in other countries where tests are conducted - I cannot comment on that.

Per R.C's article on SAAG

" The post - shot radioactivity measurements [22] on samples extracted from the thermonuclear test site have confirmed that the fusion secondary gave the design yield."

Here reference 22 is : “Post-shot radioactivity measurements on samples extracted from thermonuclear test site” by S.B. Manohar, B.S. Tomar, S.S. Rattan, V.K. Shukla, V.V. Kulkarni and Anil Kakodkar, BARC Newsletter, No.186, July 1999.

" From a study of this radioactivity and an estimate of the cavity radius, confirmed by drilling operations at positions away from Ground Zero, the total yield as well as the break-up of the fission and fusion yields could be calculated. A comparison of the ratios of various activation products to fission products for the 15 kt device and for the 45 kt thermonuclear device also shows that these ratios are in agreement with the expected fusion yield in the thermonuclear device."

Furthermore the article also says:

"As mentioned earlier, we have not given the fusion-fission breakup and, since we have not given the composition of the materials used nor their quantitites, for reasons of proliferation sensitivity as mentioned earlier, no one outside the design team has data to calculate this fission-fusion yield breakup or any other significant parameter related to fusion burn."

2) The claim that a boosted fission *device* was tested is made in R.C's article on SAAG.

"The thermonuclear device tested on May 11 was a two-stage device of advanced design, which had a fusion-boosted fission trigger as the first stage and a fusion secondary stage which was compressed by radiation implosion and ignited. For reasons of proliferation sensitivity, we have not given the details of the materials used in the device or their quantities. Also, our nuclear weapon designers, like nuclear weapon designers all over the world, have not given the fusion component of the total yield for our thermonuclear test."

We did not have the luxury of testing the FBF core separately from the two-stage device.

The article then goes on to say

"We tested our thermonuclear device at a controlled yield of 45 kt because of the proximity of the Khetolai village at about 5 km, to ensure that the houses in this village will suffer negligible damage. All the design specifications of this device were validated by the test. Thermonuclear weapons of various yields upto around 200 kt can be confidently designed on the basis of this test."

Furthermore,

"Thermonuclear weapons of various yields upto around 200 kt can be confidently designed on the basis of this (my comment Shakti I) test."

The Hon. Webmaster is technically correct that per GoI official statements, what was tested on May 11, was a two stage device with a boosted fission device as a primary, and not a "boosted fission warhead".

There is no information available in the public domain that sheds light on what RC's comment about scaling up the yields means. You could say that the boosted fission primary design could be optimised to achieve ~200kT of yield by achieving a cleaner fission burn, or you could say that India needs to light up the second pure fusion stage to get to ~200kT.

Consequently Indian nuclear weapons development remains a black box to external observers.

I think the debating position that the Hon. Webmaster has been taking is - we need a pure fusion burn to get to higher yields. I am opposing that point of view.

There is no debate on the fact that no *warhead* of the "200kT" yield has been tested. There is also no disagreement on the fact that no *device* exceeding 40-50kT has been tested.

3) Discussions of the depth of burial, size of crater/retarc, teleseismic estimation are too sensitive to the seismic details of the site.

The seismic details of the site, the height of the water table etc... at Pokharan have never been released to the press. No discussion has been made public regarding the environmental impact of nuclear tests in the region. What exact impact these may have had on the choice of the burial depth cannot be stated at this time. In his article on SAAG, RC alludes to a number of considerations - eg. the welfare of Khetolai residents, venting of the cavity etc... - playing a role in determining the depth of burial. It is therefore unreliable, and certainly not a trivial matter, to extrapolate from things like the depth of burial to notions of what the design yield of the devices may have been.

The point about the shaft having been dug earlier is well made. It could also be that the considerations at the time of digging the shaft and the considerations in 1998 were different.

4) The term "wargaming" is too vague.

It means any number of things. A game can have a detailed, intricate and involved calculation (of questionable accuracy) involving hundreds of weapons. A game can also have a fairly simple (perhaps "overly simple") calculation involving only one notional nuclear weapon.

Within the framework of deterrence - a publicly played wargame has utility in a specific context.

For example, if a government chooses to publicly entertain wargames involving large nuclear arsenals vis-a-vis a specific adversary - it will make public the detailed information necessary for the adversary to reach the conclusions the government wants it to reach.

Alternatively, if a government wants to only have a simpler game played in public, it will release information consistent with only one notional nuclear weapon.

It is not a simple matter to transit between these two game scenarios in a serious academic exercise and it is nauseating to go back and forth between them in a single conversation.

5) B. K. Subbarao's point about interference effects is incorrect.

Interference effects are detectable in a particular wavelength range and are sensitive to the coupling between the surrounding medium and the device yield. Unless you know all those things, you can't make any seismic estimates of the event, nor can you make statements about the nature of interference effects.

6) V. Sunder's analysis was corroborative regarding demonstrated yields but not sufficient to make statements about the design yield.

I was one of the people who reviewed the paper by Ramana, Thundyil and Sunder. The paper was an accurate survey of the available literature on the yields. V. Sunder's analysis supported the idea that the yields stated by the DAE were achieved. It is not possible with the available information to refine the analysis beyond this point to make accurate guesses as to what the designs actually comprised.
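For readers unfamiliar with how teleseismic yield analysis of this kind works, it typically rests on an empirical magnitude-yield relation of the form mb = a + b*log10(Y). The constants below are illustrative values often quoted for well-coupled hard-rock sites - they are emphatically NOT the constants used by any of the authors discussed here, and as noted above, the coupling of the Pokharan site is exactly the unknown that limits such estimates.

```python
import math

# Empirical body-wave magnitude vs. yield: mb = A + B * log10(Y_kt).
# A and B depend strongly on site geology and coupling; these are
# illustrative hard-rock values only, not those of any cited analysis.
A, B = 4.45, 0.75

def yield_from_mb(mb):
    """Invert mb = A + B*log10(Y) to get an indicative yield in kilotons."""
    return 10 ** ((mb - A) / B)

for mb in (5.0, 5.2, 5.4):
    print(f"mb = {mb:.1f} -> ~{yield_from_mb(mb):.0f} kt (site-dependent)")
```

The exercise makes the ambiguity obvious: a few tenths of a magnitude unit, or a different choice of A and B for the site, moves the inferred yield by tens of kilotons - which is why magnitude data alone can corroborate a stated yield without discriminating between competing design claims.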

Ramana, Thundyil and Sunder may have had guesses about the design yields, but at the time they chose not to include them in the publication. If any of them have subsequently changed their minds, the thought process behind that change is (afaik) not reflected in published work.

Also it may be noted that the paper by Ramana, Thundyil and Sunder was poorly received by the NPA community as it contradicted their basic contention that the tests were a complete failure.

If the authors are keen to revise their previous publication, they should consider writing up a complete manuscript and submitting it to Current Science.

Saturday, May 17, 2008

The Jaipur Carnage: Search for Answers Will Take Time

I wish to start by offering my condolences to the families of the people who suffered in the recent blasts in Jaipur.

A police investigation is underway, and from available media reports on the investigation - it appears that the perpetrators shifted some elements of their modus operandi. It is not clear to me if this was simply a ruse to throw off investigators or if this reflects an actual shift in the network of groups that form "Terror Inc."

Establishing the actual chain of responsibility for such events is always difficult at best and it will take time - I urge people to be patient.

I wish to remind people that if this terror is perpetrated from Pakistan - the overall aim will be to polarise Indian public opinion. There is a very well established Hindu-Muslim divide in India and there is an industry of sorts dedicated to exploiting it. Old timers may recall that during the 1993 blasts in Mumbai - the objective of the Pakistani participation was the same - to exploit the Hindu-Muslim divide to secure a more positive disposition towards Pakistan among India's Muslims. If one is serious about defeating the perpetrators of Pakistani sponsored terrorism - then one has to remove religious animosity and hatred from one's heart.

Let me also put it in a more practical way - targeting India's Muslims does not make the police's job easier. It only creates more mistrust and breeds animosity - on the whole it does not help us fight this problem.

I note that people in their rush to discuss things are forgetting that press coverage is the oxygen that most terrorist groups thrive on. There is talk of devising "comprehensive strategies" to deal with the "problem of terrorism" and deal with things on a "war" footing. Perhaps a media silencing strategy should form a part of the comprehensive solution. I think CM Raje's idea of setting up mechanisms to enhance the communication between State and Central counter terrorism resources is a reasonable one. It may be worthwhile for the IT gurus to think about that.

Aside from that, there are a few more serious questions that I feel need to be asked at this point -

1) Has the degradation of the Pakistani Army-Mullah relationship after Lal Masjid led to a decline in the ability of the Pakistani Army to control the Jihadi groups? and

2) Is the declining political stature of the Pakistan Army inside Pakistan being interpreted by terrorist groups as an incentive for independent acts?

The two questions sound similar but there is a subtle difference between them, and I leave it to my readers to debate this aspect.