Edmund K. Miller, 3225 Calle Celestial, Santa Fe, NM 87501; (505) 820-7371 (voice/fax); e.miller@ieee.org (e-mail)

Keywords: Electromagnetic radiation, electromagnetic transient analysis, error analysis, modeling, software verification and validation

This column includes some results obtained from a time-domain implementation of FARS (Far-field Analysis of Radiation Sources), denoted TDFARS, the frequency-domain version of which has been presented previously, here and elsewhere. The column concludes with a continuation of the discussion, begun in October, 2000, of the problem of validating CEM results.

1. Introduction to FARS

A procedure called FARS (Far-field Analysis of Radiation Sources) was described by Miller [1] as a possible means of determining the quantitative contribution, per unit length or per unit area, to the power radiated from some object in the frequency domain. The extension of FARS to the time domain (TDFARS) is presented here, together with some representative results.

The frequency-domain version of FARS (FDFARS) employs the usual surface-source integration to evaluate the far field, differing basically in the "bookkeeping" of how the far fields and radiated power are handled. In either case, the total field at each far-field observation angle is obtained by integrating the source distribution over the object under consideration. In addition, FDFARS keeps track of the incremental field contribution at an observation point due to each wire segment or surface area into which the object is divided. The conjugate dot product of the total field with the incremental field is then formed, and integrated over all observation angles. The result is defined as the FDFARS power contributed by that segment or area to the total radiated power. By definition, an integration of the FDFARS power over the object must equal the total far-field power. While the FDFARS results have not been proven to actually represent the per-unit-length or per-unit-area power radiated to the far field, they seem consistent with expectations. For example, the feed point and ends of a straight-wire antenna are found to be the largest contributors to the radiated power. A circular loop exhibits, on average, larger FARS values over its perimeter than a straight wire of the same length, a result expected from its curvature. Similarly, a square loop of the same overall length exhibits increased FARS values at its corners. Furthermore, FDFARS has been found to be in agreement with a different approach to radiation from a sinusoidal current filament, as reported by Schelkunoff and Feldman [2], and presented in the June, 2000, PCs for AP column.

The purpose of this brief discussion is to report the extension of FARS to the time domain for wires, as implemented in TWTD (Thin-Wire Time Domain) [3, 4]. (Thanks are due to Jerry Burke for adding FARS to TWTD.) While TDFARS also has not been proven to provide the quantitative distribution of energy radiated by an object over time, for convenience in presenting the results that follow, the discussion will proceed as though this is the case.
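To make the FDFARS bookkeeping described above a little more concrete, a minimal numerical sketch follows. The array names, the precomputed per-segment far fields, and the simple quadrature weights are hypothetical stand-ins rather than anything taken from NEC or TWTD; the point is only that the per-segment powers, formed from the conjugate dot product of each incremental field with the total field and integrated over all observation angles, sum by construction to the total radiated power.

```python
import numpy as np

ETA0 = 376.73  # free-space wave impedance, ohms

def fdfars_powers(e_seg, weights):
    """FDFARS-style bookkeeping (illustrative sketch, not the NEC/TWTD code).

    e_seg   : complex array, shape (nseg, nang, 3); the R-scaled incremental
              far E field (R * E, in volts) radiated by each segment at each
              far-field observation angle (hypothetical, precomputed input).
    weights : real array, shape (nang,); solid-angle quadrature weights that
              sum to 4*pi steradians.
    Returns the per-segment FDFARS powers and the total radiated power
    (peak-amplitude phasor convention, hence the factor of 1/2).
    """
    e_tot = e_seg.sum(axis=0)  # total far field at each observation angle
    # Conjugate dot product of each incremental field with the total field
    dens = np.real(np.einsum('saj,aj->sa', e_seg, np.conj(e_tot))) / (2.0 * ETA0)
    p_seg = dens @ weights     # integrate over all observation angles
    return p_seg, p_seg.sum()  # the sum equals the usual total radiated power
```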

1.1 Time-Domain FARS (TDFARS)

Time-domain FARS, or TDFARS, works in essentially the same way as in the frequency domain, but with the addition of tracking the time variation of the power and energy radiated by the object being studied [5]. For example, the total far E field is obtained as a function of time and observation angle, for a wire of contour C(r'), from

$$\mathbf{E}(\mathbf{r},t)=\frac{\mu_0}{4\pi}\int_{C(\mathbf{r}')}\frac{\hat{\mathbf{R}}\times\left[\hat{\mathbf{R}}\times\partial\mathbf{I}(\mathbf{r}',t')/\partial t'\right]}{R}\,dw'\qquad(1)$$

where r and r' denote observation- and source-point coordinate vectors, R = r − r', and R = |R|. The observation and source times are indicated by t and t', respectively, and the retarded time is given by t' = t − R/c. The incremental far E field from segment i, e_i(r,t), is given by Equation (1), with C(r') replaced by C_i(r'). The incremental and total far-field power-flow densities at r, p_{i,TDFARS}(r,t) and P(r,t), thus become

$$p_{i,\mathrm{TDFARS}}(\mathbf{r},t)=\frac{\mathbf{e}_i(\mathbf{r},t)\cdot\mathbf{E}(\mathbf{r},t)}{\eta_0}\qquad(2)$$

and

$$P(\mathbf{r},t)=\frac{\mathbf{E}(\mathbf{r},t)\cdot\mathbf{E}(\mathbf{r},t)}{\eta_0}=\sum_i p_{i,\mathrm{TDFARS}}(\mathbf{r},t)\qquad(3)$$

respectively, with η_0 being the free-space wave impedance.


By integrating over the 4π steradians of observation angles, the total TDFARS power per segment at time t, p_{i,TDFARS}(t), denoted the linear power density (LPD), is given by

$$p_{i,\mathrm{TDFARS}}(t)=\oint_{4\pi}p_{i,\mathrm{TDFARS}}(\mathbf{r},t)\,R^2\,d\Omega\qquad(4)$$

with the total TDFARS power coming from summing over all segments, so that

$$P_{\mathrm{rad}}(t)=\sum_i p_{i,\mathrm{TDFARS}}(t)\qquad(5)$$

with P_rad(t) being the usual radiated power. The per-segment linear energy density (LED), and the total energy radiated as a function of time, can then be written as

$$u_{i,\mathrm{TDFARS}}(t)=\int_0^t p_{i,\mathrm{TDFARS}}(t')\,dt'\qquad(6)$$

and

$$U_{\mathrm{rad}}(t)=\sum_i u_{i,\mathrm{TDFARS}}(t)=\int_0^t P_{\mathrm{rad}}(t')\,dt'.\qquad(7)$$
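The time-domain bookkeeping of Equations (2) through (7) can be sketched in the same spirit as the frequency-domain sketch above. Again, the array names and the quadrature and time-integration choices are illustrative assumptions, not the TWTD implementation:

```python
import numpy as np

ETA0 = 376.73  # free-space wave impedance, ohms

def tdfars_lpd_led(e_seg, weights, dt):
    """TDFARS-style bookkeeping (illustrative sketch, not the TWTD code).

    e_seg   : real array, shape (nseg, nang, ntime, 3); the R-scaled
              incremental far E field of each segment versus observation
              angle and time step (hypothetical, precomputed input).
    weights : real array, shape (nang,); solid-angle quadrature weights
              summing to 4*pi steradians.
    dt      : time step in seconds.
    Returns the linear power density (LPD), total radiated power, linear
    energy density (LED), and total radiated energy versus time.
    """
    e_tot = e_seg.sum(axis=0)                                # total far field E(r, t)
    dens = np.einsum('satj,atj->sat', e_seg, e_tot) / ETA0   # e_i . E / eta_0
    lpd = np.einsum('sat,a->st', dens, weights)              # integrate over 4*pi sr
    p_rad = lpd.sum(axis=0)                                  # sum over all segments
    led = np.cumsum(lpd, axis=1) * dt                        # running time integral
    u_rad = np.cumsum(p_rad) * dt                            # total energy vs. time
    return lpd, p_rad, led, u_rad
```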

Figure 2. The linear energy densities per segment for a 10-m wire, excited at its center as an antenna and by a broadside-incident plane wave as a scatterer. The results are normalized to a space-integrated value of unity.


1.2 Some Results from TDFARS

The result of applying TDFARS to a straight-wire dipole, excited at its center by a Gaussian voltage pulse having a = 4.2 × 10^8 s^-1 and t_max = 1.5 × 10^-8 s, is shown in Figure 1, where the linear energy density per segment is plotted at several time steps. The 10-m wire has 101 segments and a radius of 1 mm, and the time step is 3.30253 × 10^-10 s, chosen such that cΔt = Δx.

As should be anticipated, the integrated energy appears to increase monotonically with advancing time, although it should be noted that not all of the energy has been radiated away by the last time step shown in this figure. The near-end radiated energy increases by a factor of about 3.5 between time steps 50 and 999, which seems consistent with the expectation that, once current and charge pulses have been set into motion, an obvious place for radiation to occur is at the wire ends, due to the reflection of the Q/I pulses there. Aside from the fact that the LEDs for this problem vary smoothly along the wire, as contrasted with their oscillatory nature in the frequency domain, they are qualitatively similar to the LPDs for a straight-wire antenna in the frequency domain, having their largest values at the ends and the feed point. It is also worth noting that the TDFARS curves of Figure 1 bear some resemblance to exact, analytical expressions for the energy per unit length leaving a transient current pulse with Gaussian time variation, as reported by Smith and Hertel [6].
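As a quick cross-check of the model parameters just quoted, the segment length and the time step satisfying cΔt = Δx can be reproduced directly; the few lines below simply restate the numbers given in the text:

```python
# Cross-check of the TDFARS model parameters quoted in the text.
c = 2.99792458e8        # speed of light, m/s
length = 10.0           # wire length, m
nseg = 101              # number of segments
dx = length / nseg      # segment length, about 0.099 m
dt = dx / c             # time step chosen so that c*dt = dx
print(f"dx = {dx:.4e} m, dt = {dt:.5e} s")   # dt is about 3.30e-10 s
```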

Figure 1. The linear energy densities at time steps 50, 75, 100, 125, 175, 250, 500, 750, and 999, for a wire excited at its center by a Gaussian voltage pulse, plotted versus segment number (1 to 101).

Perhaps not so obvious is the reason for the continuing increase in energy that comes from the region around the wire's center, where the exciting voltage has been applied. Note that the level of the exciting voltage exceeds 10^-10 for only about 20 time steps, and it is essentially zero beyond the 30th. Yet, the TDFARS energy values near the center of the wire increase by more than a factor of two after this time. An answer to this question is still being sought.

Shown in Figure 2, for the wire of Figure 1, are the LEDs at time step 999, when the excitation is a broadside-incident plane wave. The LEDs for the antenna at the same time are included for comparison, with each curve normalized to an integrated value of 1 J. The ends of the wire for the scattering case are seen to be less prominent than for the antenna. Finally, in Figure 3 the scattering LEDs, again at time step 999, are shown for broadside incidence and 10° from axial incidence. For the latter situation, the ends of the wire are seen to have the strongest LEDs, with those at the end away from the incident wave being larger than those on the near end, analogous to what is seen in FDFARS results [1].

Figure 3. The linear energy densities per segment for a 10-m wire scatterer, excited by unit-amplitude plane waves incident from broadside and 80° from broadside (the right end in this plot).

2. Validating CEM Results

In the October, 2000, PCs for AP, I briefly revisited a topic that has been considered many times previously in this column [7]. Earlier discussions were addressed to a variety of validation issues, initially instigated by Leo Felsen at the 1987 Blacksburg AP-S meeting, and which led to validation workshops at the 1989 and 1990 AP-S meetings. More recently, I've been harping about how little published material seems to include any quantitative validation statements concerning the numerical results that are presented. This was the problem mentioned in the October column, as a result of my hearing a number of presentations at the Salt Lake City meeting in which validation, at least in a quantitative sense, seemed to be largely ignored. A more-detailed discussion of my perspective on some aspects of validation can be found in Miller [8].

Here, I'll include a number of typical examples of numerical results described in a subjective, non-quantitative way, as they appeared in the AP-S Transactions a few years ago. I'll contrast these examples by presenting what seem to me to be viable and easily developed alternate descriptions of the same data, but given in a more quantitative, and therefore more useful, form for the reader. I'll also include a couple of examples where the authors do actually present their data in quantitative terms, to demonstrate how straightforward this can be. First, however, I'll discuss some general ideas on the topic that I've formed over the years, and offer some suggestions for how I think CEM software might be improved in terms of validation.

2.1 Some Comments and Observations Concerning Validation

In spite of the tremendous amount of CEM software development and application that has gone on over the past 35 years or so, one of the most time-consuming activities associated with CEM remains that of verifying code (software) performance and validating the model results. Few available computational packages offer the user any built-in assistance in resolving the numerical, let alone physical, validity of the results that the code provides. First, let's agree that model accuracy is important for a variety of reasons, including the following observations:

• Inaccurate but efficiently and easily obtained results have no value, and can even be harmful, i.e., getting wrong results faster is not a useful endeavor.
• Results of unknown accuracy are of similarly questionable value.
• Ideally, accuracy would be a quantity that is "dialable" by the modeler (here, the modeler is taken to be someone reasonably knowledgeable about CEM, but who works more as a "user" than as a developer of computer-modeling codes).
• Also ideally, tradeoffs between accuracy and efficiency should be explicitly accessible to the modeler, i.e., if the number of unknowns is increased, how might the error (uncertainty) go down and the cost go up?

There is quite a variety of ways by which the numerical accuracy and physical relevancy of computed results might be checked. Included among these are tests based on:

• numerical convergence;
• reciprocity;
• energy conservation;
• near-field behavior and boundary (residual) error;
• "non-physically" appearing or actual behavior;
• comparison with independent numerical results;
• comparison with experimental measurements.

It's probably worthwhile, at this point, to consider just what validation might mean in the context of CEM results. From the 1975 edition of the American Heritage dictionary, we find that "to validate" is to: declare or make legally valid; mark with an indication of official sanction; or substantiate or verify, of which the latter is most relevant to the present discussion. Note, also, that the two words "verification" and "validation" denote different, but comparably important, aspects of CEM and other computer modeling. Verification is the process of determining that a computer code produces results consistent with its design. Validation is concerned with establishing how well the computed results conform to physical reality. Obviously, verification is a necessary, but not sufficient, condition for establishing acceptable code performance. Validation determines essentially how reliably the results for a particular problem, generated by a given code, can be used for analysis and design of actual hardware. While the general concepts of verification and validation are clearly important in the abstract, let's consider some specific instances where they are essential in practice:

• When moving codes between computers.
• To confirm continued valid operation of a computer code over time on a given computer system.
• To provide guidance to the modeler concerning the validity of the computed results.
• When the computed results are used in design applications.

Accomplishing these goals might be best realized by establishing a protocol for estimating modeling errors or uncertainty, using such things as:

• pre-computed and stored test cases in a code;
• standardized error norms;
• internal checks, i.e., those that can be done within a modeling code itself;
• external checks, those that require data from other sources;

to give the modeler some sort of "figure of merit" concerning the results that are obtained. (I expect to discuss the concept of a figure of merit in a future column.)

From the user's perspective, a variety of problems can arise when exercising a modeling code. Being unable to resist a compulsion to categorize such things, I will assign these problems to one of four types, as follows:

• Type 0: the code "bombs." This might be due to system software or the modeling code itself, both of which are relatively straightforward to diagnose.
• Type 1: the code runs, but produces obviously wrong results. Again, this might be due to system software or the modeling code itself, with the latter caused by a formulation, implementation, or coding error.
• Type 2: the code runs, producing physically plausible results that are wrong. As before, this might be caused by system software or the modeling code. This type of error is probably the most difficult to handle, because the modeler is less likely to recognize that there's a problem, besides which the cause is less obvious.
• Type 3: the code produces useful results that are misinterpreted, mistrusted, or misused by the modeler. This sort of situation can occur for various reasons, including unrealistic expectations, unwarranted skepticism, and/or blind faith on the part of the modeler.

The actual errors associated with any computer model can themselves be defined in various ways. Two primary sources of modeling errors I've found useful to consider are the:

• Physical modeling error (ε_ph), which arises from replacing the real physical problem of interest by an idealized mathematical/numerical representation.
• Numerical modeling error (ε_num), which comes from obtaining only an approximate solution to that idealized representation, and which is itself comprised of two components:

1. a solution error, ε_sol(s) = I_sol(s) − I_exact(s), where s is some observation variable, which is the difference that can exist between the computed results and an exact solution, even were the linear system of equations to be solved exactly, due to using a finite number of unknowns; and

2. an equation error (or residual), e.g., ε_eq(s) = L(s,s')I_sol(s') − E_exc(s), which comes from the equation mismatch that can occur in the numerical solution because of round-off due to finite-precision computations, or, when using an iterative technique, because of limited solution convergence.

(A small numerical sketch of these two error components follows below.)
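As a small illustration of these two error components (the matrix, excitation, and reference solution below are made-up stand-ins, not any particular CEM formulation), a residual-based equation error and a reference-based solution error might be computed as follows:

```python
import numpy as np

def numerical_error_measures(Z, V, i_sol, i_ref=None):
    """Sketch of the two numerical modeling-error components.

    Z, V   : a moment-method-style linear system Z i = V (stand-in values).
    i_sol  : the computed solution vector.
    i_ref  : an exact or reference solution, if one is available.
    """
    # Equation (residual) error: mismatch left when the computed solution
    # is substituted back into the discretized equation.
    eps_eq = np.linalg.norm(Z @ i_sol - V) / np.linalg.norm(V)
    # Solution error: difference from a reference solution, when known.
    eps_sol = None
    if i_ref is not None:
        eps_sol = np.linalg.norm(i_sol - i_ref) / np.linalg.norm(i_ref)
    return eps_eq, eps_sol

# Example with a made-up 4x4 system:
rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
V = rng.standard_normal(4) + 1j * rng.standard_normal(4)
i_sol = np.linalg.solve(Z, V)
print(numerical_error_measures(Z, V, i_sol, i_ref=i_sol))
```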

For most numerical solutions, ε_eq is on the order of the round-off error, so that the solution error is of primary concern, the reduction of which is usually achieved by increasing the number of unknowns, N_x. Also, generally speaking, N_x can be made large enough that ε_num < ε_ph. Experimental data provide the only way (aside from some special cases, like the PEC sphere) to evaluate ε_ph. A variety of ways are available to assess ε_num, including comparison with other analytical or numerical results, examining convergence, and evaluating known requirements of a valid solution (such as boundary error, reciprocity, energy conservation, etc.). It has also been observed that ε_num generally decreases exponentially as a function of N_x.

However, when the modeling code is being used and its results in a specific application are to be validated, the essential requirement is to develop some error measure, to establish whatever might be the discrepancy between the model result and some reference. An error measure might be a scalar quantity, e.g., the total radiated power, or a vector quantity, e.g., a radiation pattern. It also might be local, e.g., input impedance, or global, e.g., current/charge distributions. It should be suited to the routine testing of such situations as: all parameter variations of a given problem for which results have previously been validated; a new kind of problem to which the code being employed has not been applied; comparison with experimental or other independent results; and design where tolerance bands have been specified.

Furthermore, from the perspective of the modeler, it would be extremely helpful were the code being used designed to perform some sort of internal error analysis, and to provide hints about what might be the cause of incorrect results and how the problem might be corrected. It would greatly assure the user that a modeling code continues to work correctly if it provided a sequence of intermediate and final answers for several distinct "calibration" problems, in much the same way that a PEC sphere has been used to calibrate an RCS range. Additional, less-detailed test cases should be provided as part of the software package, including both the computer model and the numerical results it produces. As is the case with NEC and several other available packages, an expanding library of computer models should also be included, so users don't always need to develop their own from scratch, thereby avoiding another common source of error.

Finally, to elaborate on a point briefly mentioned above, it would be extremely valuable were a computer code to have a capability for the user to specify the accuracy desired, or, equivalently, the acceptable uncertainty, of the results being computed. This could lead to an explicit tradeoff between accuracy and the cost of getting the results. It would also promote the process of designing an EM component to some specification: initially specifying a lesser accuracy as wider parameter variations are employed, and tightening the accuracy as the desired design is approached. Being able to specify model accuracy is also closely connected to the idea of adaptive modeling, i.e., varying a computer model through successive generations to systematically reduce and allocate the error in a desired fashion. This would permit the computational resources to be allocated to the most critical areas of the problem.
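One way of acting on the stored-test-case and calibration-problem suggestions above would be a small regression harness shipped with a code, along the following lines; the file layout, the run_model interface, and the tolerance are hypothetical placeholders for whatever a given package actually provides:

```python
import json
import numpy as np

def check_calibration_cases(run_model, case_file, rel_tol=0.01):
    """Compare fresh model runs against stored reference results (sketch).

    run_model : callable taking a case description and returning an array
                of output quantities (hypothetical interface).
    case_file : JSON file holding, for each calibration case, its input
                description and previously validated reference outputs.
    rel_tol   : acceptable relative RMS deviation from the references.
    """
    with open(case_file) as f:
        cases = json.load(f)
    report = {}
    for name, case in cases.items():
        new = np.asarray(run_model(case["input"]), dtype=float)
        ref = np.asarray(case["reference"], dtype=float)
        rel_rms = np.sqrt(np.mean((new - ref) ** 2)) / np.sqrt(np.mean(ref ** 2))
        report[name] = {"rel_rms": rel_rms, "pass": rel_rms <= rel_tol}
    return report
```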

2.2 Examination of Some Examples from the AP-S Transactions

Following below are six examples, rather typical I believe, of how numerical results are discussed in the AP-S Transactions. These are taken from issues published several years ago. For each of the graphs, the description given by the author(s) of the article is excerpted, and this is then followed by a suggestion or two of how I think the same results could have been accompanied by a more quantitative characterization. The two concluding graphs and their author descriptions are included as counterexamples, to show not only that some authors do describe their results quantitatively, but also how little additional effort this needs to entail.

Please note that while I suggest, several times below, that RMS error measures might be suitable for quantifying the uncertainty, or accuracy, of the results being discussed, there are a number of alternate ways that this might be done. The most appropriate error measure would seem to be one that is relevant to the application for which the model computation is intended. For example, in a direction-finding system, it would be the location and depth of pattern minima that are most important. When interference is the major concern, then the sidelobe maxima could be most relevant. For RCS applications, angle locations and magnitudes of the scattered field provide the best error measures. Please also note that, in choosing the examples included below, I have selected cases where at least two sets of results are presented by the author(s) for the problem being studied. In this respect, these examples are far ahead of many articles, where no validation by means of independent results is exhibited at all.

[Editor's note: While the following examples are based on figures taken from papers that have appeared in the IEEE Transactions on Antennas and Propagation, the references are not given and as much identifying material as possible has been removed on purpose. The intent of using these figures is not to be critical of the authors or the papers involved. Rather, these are simply considered to be typical of Transactions figures illustrating the points in this column. WRS]

Example 1: "The computed values from our code agree very well with measured data for both HH and VV polarizations" (Figure 4). An average difference in dB, an RMS match, or some other error measure appropriate for the anticipated application of their code could have easily been calculated by the authors. That would have given the reader a numerical value for the data quality that would have been more informative than saying that they "agree very well." They might have said that these results, plotted on a dB scale, appear to exhibit a match for HH polarization with the measured results (the solid line) within about 2 dB up to about 60°, where a few points are 5 dB or more apart. Also, it appears that more computed data points are needed to adequately define the minimum, where the difference is, not surprisingly, largest.
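The kind of quantitative statement suggested for Example 1 takes only a few lines to produce. In the sketch below, the two patterns are assumed to have already been put onto a common set of angles, an assumption on my part rather than anything taken from the paper:

```python
import numpy as np

def pattern_difference_db(computed_db, measured_db, angles_deg):
    """RMS and worst-case differences between two patterns given in dB (sketch)."""
    computed_db = np.asarray(computed_db, dtype=float)
    measured_db = np.asarray(measured_db, dtype=float)
    angles_deg = np.asarray(angles_deg, dtype=float)
    diff = computed_db - measured_db
    rms_db = float(np.sqrt(np.mean(diff ** 2)))      # global RMS difference, dB
    i_worst = int(np.argmax(np.abs(diff)))           # where the match is poorest
    return rms_db, float(abs(diff[i_worst])), float(angles_deg[i_worst])

# Usage: rms, worst, where = pattern_difference_db(calc, meas, angles)
# giving, e.g., "RMS difference of X dB; largest difference of Y dB at Z degrees".
```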

Example 2: "The two solutions compare well..." (Figure 5). Calculating an RMS dB difference between the two sets of results would have been useful to give the reader a global measure of their agreement. The authors might also have given the maximum dB differences and where they occurred, something that the reader can approximate, but which would have been easy for the authors to do. Such quantitative comparisons are easily done, and would likely lead the reader to conclude that the results do, indeed, "compare well." The authors did discuss, in some detail, possible explanations of why the differences seen with respect to some earlier results in this same article did occur.

Example 3: "...there is excellent agreement...over more than 50 dB dynamic range" (Figure 6). The above comment refers specifically to how the long-dashed, or upper, curve compares with the solid line. The differences are, indeed, almost graphically indistinguishable over much of the frequency range. This fact might have been emphasized by computing some RMS or average difference between the two results. The authors might also have said something like, "the difference is less than XX dB over all frequencies, except between __ and __."

Example 4: "The calculations agree reasonably well with the measurements" (Figure 7). The data appear to be somewhat shifted in frequency here, a point that might have been made by the authors, and a difference that is fairly often exhibited when comparing two sets of data. Also, it would have been easy to give the maximum difference between the measured and computed result for each value of x, as well as the RMS difference, judging from the amount of data available.

Example 5: "The agreement between our numerical results and measurements is good..." (Figure 8). I'm not sure that someone casually examining these data would agree that, even subjectively, the agreement can be described as "good." It would seem reasonable here to calculate the normalized, absolute difference between the computed and measured results as a function of frequency. Expressing the results in dB, as is done here, greatly exaggerates the differences. This can possibly be misleading, although it is certainly compatible with how reflection loss is presented. Expressing these differences as normalized absolute values may be more appropriate in terms of establishing the model validity, however.

Figure 4. Example 1: "The computed values from our code agree very well with measured data for both HH and VV polarizations." [The horizontal axis is the observation angle, from 0° to 90°.]

Figure 5. Example 2: "The two solutions compare well...." [The horizontal axis is theta, from −60° to 60°.]


Example 6: "...when the staircase data is smoothed out very little difference is observed with respect to the exact solution" (Figure 9). The authors present their numerical results in a "staircased" form because that is what their code produces. Most readers would probably not have argued had they shown their model results linearly interpolated between the staircase samples, which would seem to be a very reasonable thing to do. I think it would be reasonable to calculate an RMS difference of both the staircase and the interpolated numerical results with respect to the exact solution, shown here by the smooth curve. This would give the reader a quantitative idea, not easily determined from the graph, of how big that "very little difference" is, which does indeed appear to be small.
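The interpolation and RMS comparison suggested for Example 6 might look like the following sketch; the sample coordinates and the exact-solution callable are placeholders, and nothing here comes from the paper itself:

```python
import numpy as np

def staircase_rms_errors(sample_angles, sample_values, exact, n_dense=721):
    """RMS differences of staircased and interpolated results from an exact
    solution (sketch of the suggestion made for Example 6).

    sample_angles : angles (degrees) at which the staircased code output is
                    sampled (hypothetical input).
    sample_values : the staircased model results at those angles.
    exact         : callable returning the exact solution at given angles.
    """
    sample_angles = np.asarray(sample_angles, dtype=float)
    sample_values = np.asarray(sample_values, dtype=float)
    dense = np.linspace(sample_angles[0], sample_angles[-1], n_dense)
    exact_vals = np.asarray(exact(dense), dtype=float)
    # Staircase: hold each sample constant over its interval (nearest sample).
    nearest = np.abs(dense[:, None] - sample_angles[None, :]).argmin(axis=1)
    stair = sample_values[nearest]
    # Smoothed: linear interpolation between the staircase samples.
    smooth = np.interp(dense, sample_angles, sample_values)
    rms = lambda y: float(np.sqrt(np.mean((y - exact_vals) ** 2)))
    return rms(stair), rms(smooth)
```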

Figure 6. Example 3: "...there is excellent agreement...over more than 50 dB dynamic range." [The horizontal axis is frequency, from 0 to 18 GHz.]

Figure 8. Example 5: "The agreement between our numerical results and measurements is good...." [The circles are the values predicted in the paper; the dashed line is the result predicted in a reference; and the solid line is the measured result from a reference. The horizontal axis is frequency, from 8 to 13 GHz.]

Figure 7. Example 4: "The calculations agree reasonably well with the measurements." [The solid lines are the theory, with the circles for a frequency of f0 and the inverted triangles for a frequency of 1.08f0; the dotted lines are the experiment, with the squares for a frequency of 1.10f0 and the triangles for a frequency of 1.18f0.]


Figure 9. Example 6: "...when the staircase data is smoothed out very little difference is observed with respect to the exact solution." [The horizontal axis is the angle, from 0° to 360°.]

The next example, one of the only ones I could find in the numerous issues through which I looked specifically to find it, is a case where the authors do make what I consider to be useful quantitative comments regarding the results presented.

Example 7: Here, I'll simply quote the relevant part of the authors' discussion concerning this figure (Figure 10). "The figure shows that the resonant frequency calculated by the [Method 1] technique is near 4.19 GHz with a 2:1 VSWR bandwidth of about 5%, while the resonant frequency calculated by [the second] method is 4.22 GHz, which represents a difference of 0.72%. Also, the minimum reflection coefficient is near zero for [Method 1] and 0.078 for [the second] method. Overall, the two curves match quite well, excluding the fact that one curve seems to be shifted from the other by an amount of 0.03 GHz." They go on to speculate about why this shift occurs, and then present the input impedance in the next figure. The comparison of the authors' approach and "Method 2" for the impedance is not quite as good, but they don't discuss this at all.

Figure 10. Example 7: [The solid line is "Method 1," the result of the paper; the dotted line is "Method 2." The horizontal axis is frequency, from 3.6 to 5 GHz.]


I’ll conclude this consideration of validation with a few additional comments in the next column.

3. References


1. E. K. Miller, "PCs for AP and Other EM Reflections," IEEE Antennas and Propagation Magazine, 41, 2, 1999, pp. 82-86; 41, 3, 1999, pp. 83-88; and 42, 3, 2000, pp. 84-89.


2. S. A. Schelkunoff and C. B. Feldman, "On Radiation from Antennas," Proceedings of the IRE, 30, November, 1942, pp. 511-516.



3. E. K. Miller, A. J. Poggio, and G. J. Burke, "An Integro-Differential Equation Technique for the Time-Domain Analysis of Thin-Wire Structures. Part I: The Numerical Method," Journal of Computational Physics, 12, 1973, pp. 24-48.

4. A. J. Poggio, E. K. Miller, and G. J. Burke, "An Integro-Differential Equation Technique for the Time-Domain Analysis of Thin-Wire Structures. Part II: Numerical Results," Journal of Computational Physics, 12, 1973, pp. 210-233.

5. E. K. Miller and G. J. Burke, "Time-Domain Far-Field Analysis of Radiation Sources," IEEE International Symposium on Antennas and Propagation, Salt Lake City, Utah, July, 2000.

6. G. S. Smith and T. W. Hertel, "On the Radiation of Energy from Simple Current Distributions," to appear in IEEE Antennas and Propagation Magazine.

7. E. K. Miller, "PCs for AP and Other EM Reflections," IEEE Antennas and Propagation Society Newsletter, 29, 5, October, 1987, pp. 31-33; 29, 6, December, 1987, pp. 31-33; 31, 5, October, 1989, pp. 34-39; IEEE Antennas and Propagation Magazine, 32, 1, February, 1990, pp. 36-40; 32, 4, August, 1990, pp. 45-48; 33, 5, October, 1991, pp. 59-63; 36, 5, October, 1994, pp. 54-56; 37, 6, December, 1995, pp. 89-92; 38, 4, August, 1996, pp. 66-70; 39, 6, December, 1997, pp. 82-87; 40, 4, August, 1998, pp. 90-93.

8. E. K. Miller, "Characterization, Comparison, and Validation of Electromagnetic Modeling Software," ACES Journal (special issue on Electromagnetics Computer Code Validation), 1989, pp. 8-24.
