A particle confined to an impassable box is a paradigmatic and exactly solvable one-dimensional quantum system modeled by an infinite square well potential. Here, we explore some of its infinitely many generalizations to two dimensions, including particles confined to rectangle-, ellipse-, triangle-, and cardioid-shaped boxes using physics-informed neural networks. In particular, we generalize an unsupervised learning algorithm to find the particles’ eigenvalues and eigenfunctions, even in cases where the eigenvalues are degenerate. During training, the neural network adjusts its weights and biases, one of which is the energy eigenvalue, so that its output approximately solves the stationary Schrödinger equation with normalized and mutually orthogonal eigenfunctions. The same procedure solves the Helmholtz equation for the harmonics and vibration modes of waves on drumheads or transverse magnetic modes of electromagnetic cavities. Related applications include quantum billiards, quantum chaos, and Laplacian spectra.

Artificial intelligence has impacted our culture and livelihood dramatically since the turn of the millennium, from smartphones to chatbots to self-driving cars. Scientists have recently begun using machine learning techniques not only to improve our understanding of the world around us but also to change the way we approach scientific and computational methods, including those used to solve the fundamental differential equations that model physical phenomena in our world.

In previous work, physics-informed neural networks have been used to solve classical physics problems, with Lagrangian and Hamiltonian formalisms, for both ordered and chaotic dynamics.1–3 On a more fundamental level, methods have been developed to search for symmetries,4 conservation laws,5 and invariants within such dynamical systems.6 Furthermore, studies have been conducted on specific problems such as heat transfer,7 irreversible processes,8 and energy-dissipating systems.9 These techniques have even been applied to quantum systems.10–13 In this study, we use physics-informed neural networks to solve the quantum eigenvalue problem for particles confined to impassable planar boxes of diverse shapes.

Our work extends that of Jin, Mattheakis, and Protopapas (JMP),12,13 who used neural networks to solve the one-dimensional quantum eigenvalue problem for a small number of systems. Here, we generalize the JMP algorithm to two dimensions and find the eigenvalues and eigenfunctions of the stationary Schrödinger differential equation with Dirichlet boundary conditions in two-dimensional regions that exhibit integrable, ergodic, or chaotic classical billiard dynamics, including irrational triangles and cardioids. Our extension readily handles degenerate energy levels.

A key feature of JMP (pronounced “jump”) is the use of unsupervised learning: during training, the neural network is not given the Schrödinger equation’s solution, which need not be available; instead, the network adjusts its parameters to approximately satisfy the Schrödinger equation and the orthonormality of its solutions. This characteristic showcases the natural ability of physics-informed neural networks to find solutions to problems that may not be solvable analytically or numerically by other methods. Extending this work to multi-dimensional systems advances both machine learning and physics by broadening the usefulness of neural networks and increasing the ways scientists can solve problems.

Inspired by mammalian brains, conventional feed-forward neural networks are nested nonlinear functions that depend on many parameters called weights w_{nm}^l and biases b_n^l. Training adjusts the weights and biases to approximate the desired outcomes.

Given a nonlinear neuron activation function such as σ[z] = tanh z, training recursively updates the neuron activities a_n^l = σ[z_n^l] according to their neuronal inputs

z_n^l = ∑_m w_{nm}^l a_m^{l−1} + b_n^l   (1)

for neuron 1 ≤ n ≤ N_l of layer 2 ≤ l ≤ L, where the network inputs and outputs are I_m = a_m^1 and O_m = a_m^L, respectively. As described in Appendix A, stochastic gradient descent adjusts the weights and biases to minimize a loss (objective, cost, or error) function such as

L = ∑_m (O_m − Ô_m)²,   (2)

where the Ô_m are the desired outputs.
where Ôs are the desired outputs. Physics-informed neural networks will often add some physics intuition to the loss function by, for example, rewarding a dynamical system for Hamiltonian flow.14,15
Here, we extend and reformulate the JMP algorithm from one to two dimensions. The neural network has two hidden layers of neurons with sinusoidal activation functions σ(z) = sin z (more traditional activations such as sigmoids also work, as in our Appendix C example). Inputs are the positions {x, y} and the constant 1, where the constant is converted to the energy eigenvalue E = W_{11}^1 by the adjustable weight W_{11}^1, as shown in Fig. 1. The output is an incomplete eigenfunction ψ_E(x, y), which is multiplied by a function B(x, y) that vanishes on the box's perimeter ∂Ω to enforce the boundary conditions and generate the complete eigenfunction

Ψ_E(x, y) = B(x, y) ψ_E(x, y).   (3)
FIG. 1.

One step of a neural network gradient descent to a second eigenfunction Ψ_E → Ψ_2 given a first eigenfunction Ψ_1 (left column). Arrows represent weights w_{nm}^l, and circles represent biases b_n^l. In practice, much of the computation involves reverse-mode automatic differentiation. Application to a particle confined to a cardioid-shaped box (right column). Classical billiards orbits are chaotic (top right), while the quantum ground state and first excited state are smooth (middle right). Plateaus in the energy plot indicate eigenvalues, and the addition of the orthogonality loss term forces the wavefunction to morph from Ψ_1 to Ψ_2 as the energy jumps from E_1 to E_2 (bottom right).

The stationary Schrödinger equation is the eigenvalue problem

Ĥ Ψ_E = E Ψ_E,   (4)

where the Hamiltonian in natural units

Ĥ = −∂²/∂x² − ∂²/∂y² + V(x, y)   (5)
includes the potential energy function V(x, y). The error or loss function

L = L_D + λ_N L_N + λ_O L_O   (6)

encourages the discovery of eigenfunctions that are normalized and orthogonal to already discovered solutions. For example, when finding a second eigenfunction Ψ_E → Ψ_2 given a first eigenfunction Ψ_1, the loss

L = \overline{(Ĥ Ψ_2 − E Ψ_2)²} + λ_N (⟨Ψ_2|Ψ_2⟩ − 1)² + λ_O ⟨Ψ_2|Ψ_1⟩²,   (7)

where the scalar product

⟨Φ|Ψ⟩ = ∬_Ω Φ*(x, y) Ψ(x, y) dx dy   (8)

and the norm squared

⟨Ψ|Ψ⟩ = ∬_Ω |Ψ(x, y)|² dx dy.   (9)

Without loss of generality, we take the eigenfunctions to be real, Ψ* = Ψ and Φ* = Φ, and approximate the integrals as sums, so

⟨Φ|Ψ⟩ ≈ (Δx Δy/(M N)) ∑_{m=1}^{M} ∑_{n=1}^{N} Φ(x_m, y_n) Ψ(x_m, y_n) = Δx Δy \overline{Φ Ψ}   (10)

and

⟨Ψ|Ψ⟩ ≈ Δx Δy \overline{Ψ²},   (11)

where x_m = m δx = m Δx/M and y_n = n δy = n Δy/N for the potential well Ω ⊂ [0, Δx] × [0, Δy], and the overbars indicate averages.
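In code, these averaged sums are one-liners. The sketch below is our illustration (the helper names and signatures are assumptions, not the authors' code) and estimates Eqs. (10) and (11) from eigenfunction values sampled in the bounding box:

```python
import numpy as np

def scalar_product(Phi, Psi, area):
    """<Phi|Psi> ~ area * mean(Phi * Psi), Eq. (10), with area = Delta_x * Delta_y."""
    return area * np.mean(Phi * Psi)

def norm_squared(Psi, area):
    """<Psi|Psi> ~ area * mean(Psi**2), Eq. (11)."""
    return area * np.mean(Psi**2)

# Check with the unit square's exact ground state Psi = 2 sin(pi x) sin(pi y):
x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
Psi = 2.0 * np.sin(np.pi * x) * np.sin(np.pi * y)
print(norm_squared(Psi, area=1.0))   # close to 1
```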

With relative loss weights λ_N = λ_O = 1, minimizing the differential equation loss L_D enforces the stationary Schrödinger equation Ĥ Ψ_E = E Ψ_E, minimizing the normalization loss L_N discourages trivial zero solutions, and minimizing the orthogonality loss L_O encourages independent solutions. Learning continues until the total loss L, the differential equation loss L_D, and its rate of change are all small. Appendix B discusses alternate versions of the normalization loss. Appendix C discusses our implementation details.
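A compact way to assemble this composite loss is sketched below in PyTorch; this is our illustration under assumptions (the Hamiltonian term ĤΨ_2 is computed elsewhere with automatic differentiation, and the specific weighting follows Eq. (7)).

```python
import torch

def total_loss(H_Psi2, Psi2, Psi1, E, area, lam_N=1.0, lam_O=1.0):
    L_D = torch.mean((H_Psi2 - E * Psi2) ** 2)      # Schrodinger-equation residual
    norm2 = area * torch.mean(Psi2 ** 2)            # <Psi2|Psi2>, Eq. (11)
    L_N = (norm2 - 1.0) ** 2                        # penalize the trivial zero solution
    overlap = area * torch.mean(Psi2 * Psi1)        # <Psi2|Psi1>, Eq. (10)
    L_O = overlap ** 2                              # penalize overlap with the known state
    return L_D + lam_N * L_N + lam_O * L_O          # Eq. (6)
```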

We consider potential energy functions of the form

V(x, y) = 0 for (x, y) ∈ Ω and V(x, y) = ∞ for (x, y) ∈ ∂Ω,   (12)

where Ω is the interior of the potential well and ∂Ω is its boundary. We study particles trapped in rectangle-, ellipse-, triangle-, and cardioid-shaped potential wells. The rectangle eigenfunctions involve sinusoids, and the elliptic eigenfunctions involve Mathieu functions,16,17 but the triangle and cardioid eigenfunctions are more complicated.
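For reference, boundary factors like those in Table I can be coded directly. The sketch below is our illustration; the rectangle and ellipse forms follow Table I, while the cardioid form is an assumption based only on its polar boundary r = 1 − δ sin θ.

```python
import numpy as np

def B_rectangle(x, y, a, b):
    # vanishes on x = 0, x = a, y = 0, and y = b
    return (x / a) * (1 - x / a) * (y / b) * (1 - y / b)

def B_ellipse(x, y, a, b):
    # vanishes on (x/a)**2 + (y/b)**2 = 1
    return 1 - (x / a) ** 2 - (y / b) ** 2

def B_cardioid(x, y, delta=0.8):
    # vanishes on the polar curve r = 1 - delta*sin(theta) (assumed form)
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    return (1 - delta * np.sin(theta)) - r
```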

From the classical billiards perspective, rectangle- and ellipse-shaped boxes are non-ergodic and integrable, while cardioid-shaped boxes (with a polar coordinate boundary r = 1 − δ sin θ) have mixed phase spaces when convex (0 ≤ δ < 1/2) and are ergodic, mixing, and chaotic when concave (1/2 < δ ≤ 1). Triangle-shaped boxes whose angles are irrational multiples of π are ergodic and mixing but not chaotic, whereas triangle-shaped boxes with one or more angles that are rational multiples of π may not even be ergodic.18,19 From a spectral analysis perspective, for fixed Dirichlet boundary conditions, the only triangles with explicitly known Laplace spectra are the equilateral (60°–60°–60°), isosceles right (45°–45°–90°), and hemi-equilateral (30°–60°–90°) triangles.20 We expect the energy eigenvalue spacings of the integrable rectangle- and ellipse-shaped boxes to be distributed according to Poisson statistics and the eigenvalue spacings of the chaotic cardioid-shaped boxes to obey Gaussian Orthogonal Ensemble (GOE) statistics,21,22 with the spectral statistics of triangles somewhere in between.18

We construct a fully connected feed-forward neural network with 1 + 2 inputs, two hidden layers containing 100 neurons each, and one output. To train the neural network, we generate 100 random {x, y} points within the box and feed them into the network according to the algorithm outlined in Fig. 1. We typically repeat for 9 × 10⁴ training rounds (or epochs).
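A PyTorch sketch of such a network is shown below; this is our illustration rather than the authors' exact code, and for simplicity the trainable energy is stored as a separate parameter instead of the specific input weight of Fig. 1.

```python
import torch
import torch.nn as nn

class Net2D(nn.Module):
    def __init__(self, hidden=100):
        super().__init__()
        self.layer1 = nn.Linear(3, hidden)              # inputs: x, y, and the constant 1
        self.layer2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)
        self.E = nn.Parameter(torch.tensor(1.0))        # trainable energy eigenvalue

    def forward(self, x, y):
        one = torch.ones_like(x)
        h = torch.sin(self.layer1(torch.stack([x, y, one], dim=-1)))
        h = torch.sin(self.layer2(h))                   # sinusoidal activations
        return self.out(h).squeeze(-1), self.E          # incomplete eigenfunction and E

net = Net2D()
x = torch.rand(100, requires_grad=True)                 # 100 random points in the box
y = torch.rand(100, requires_grad=True)
psi, E = net(x, y)                                      # multiply psi by B(x, y) afterwards
```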

The neural network adjusts its weights and biases to converge to the ground-state energy. Adding the orthogonality term L_O to the loss function causes the neural network to leave the ground-state energy plateau and rise to the first-excited-state energy (assuming the ground state is non-degenerate). Similarly, the network finds higher excited states.

Table I and Fig. 2 summarize the results for the first three energy eigenvalues and eigenfunctions of a quantum particle confined to boxes shaped like rectangles (simply solvable), ellipses (classically integrable), irrational triangles (classically ergodic), and circles (quantumly degenerate). The eigenfunction color palette stretches from fully saturated red (for Ψ > 0) to fully saturated blue (for Ψ < 0) via completely unsaturated white (for Ψ = 0). The reference energies listed in Table I are numerically computed in Mathematica23 (and checked exactly for the rectangle). Energy plateaus correspond to eigenvalues. The energy eigenvalue uncertainty is the wiggle of the energy plateau as it rings down to its mean value, and the relative error ΔE/E* is the estimated energy minus the reference energy, divided by the reference energy. All 2D JMP eigenvalues are within 1% of the reference values. Insets are classical billiards orbits.

TABLE I.

Energy eigenvalue examples. Boundary functions B(x, y), reference eigenvalues E_n* computed in Mathematica,23 neural network eigenvalue approximations E_n, and relative errors ΔE_n/E_n*.

B(x, y)                                    E_n*    E_n            ΔE_n/E_n* (%)
Rectangle                                  6       5.99 ± 0.01    −0.17
  (x/a)(1 − x/a)(y/b)(1 − y/b)             12      12.0 ± 0.02     0.00
  where a = π/2, b = π/√2                  18      18.0 ± 0.03     0.00
Ellipse                                    4.32    4.32 ± 0.02     0.00
  1 − (x/a)² − (y/b)²                      9.13    9.11 ± 0.02    −0.22
  where a = 1, b = √2                      12.8    12.8 ± 0.01     0.00
Circle                                     5.78    5.80 ± 0.04    +0.35
  1 − (x/a)² − (y/b)²                      14.7    14.6 ± 0.05    −0.68
  where a = 1, b = 1                       14.7    14.6 ± 0.05    −0.68
Triangle                                   4.09    4.09 ± 0.00     0.00
  (1 − x/a)(y/b)((x/a) tan θ − y/b)        8.00    7.98 ± 0.06    −0.25
  where a = 4, b = 4, θ = π/√23            10.8    10.8 ± 0.04     0.00
FIG. 2.

2D JMP results for simply solvable, classically integrable, classically ergodic, and quantumly degenerate quantum particles confined to rectangle-, ellipse-, triangle-, and circle-shaped boxes. Red-white-blue contours represent eigenfunctions, and energy plateaus are at eigenvalues. Insets show classical billiard orbits. For the circular degenerate case, where E_2 = E_3, orthogonality conditions enable the network to find both eigenfunctions Ψ_2 and Ψ_3.


Smaller boxes have higher energy states, as the momentum p ∝ 1/λ implies the energy E = p²/2m ∝ 1/λ², where λ is the quantized wavelength. Thus, adjusting the scaling parameters a, b listed in Table I can keep the energy eigenvalues in a convenient numerical range. Figure 3 compares our boxes to scale.
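For instance, assuming the rectangular box's textbook spectrum in the natural units of Eq. (5), E_{nm} = π²(n²/a² + m²/b²), rescaling the box as (a, b) → (sa, sb) divides every eigenvalue by s²; with a = π/2 and b = π/√2, this spectrum reproduces the Table I reference values 6, 12, and 18.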

FIG. 3.

Relative sizes of test boxes.


The neural network implementing the 2D JMP algorithm successfully approximates the first three energy eigenvalues and eigenfunctions for particles confined to a wide range of boxes without assuming or imposing the boxes' symmetries. Higher-order excited states can be obtained similarly by adding further orthogonality terms to the loss function. Additional training can increase accuracy. Especially impressive is the network's discovery of the circle-shaped box's second and third states Ψ_2 and Ψ_3, despite their degenerate energies E_2 = E_3, thanks to the orthogonality constraints ⟨Ψ_3|Ψ_1⟩ = 0 and ⟨Ψ_3|Ψ_2⟩ = 0 encouraged by the loss L_O.

Related applications include acoustic and electromagnetic cavities and Laplacian spectra. The stationary Schrödinger equation can be written as

∇²Ψ_E + (E − V)Ψ_E = 0.   (13)

Inside the hard-walled infinite wells, V = 0 and

∇²Ψ_E + E Ψ_E = 0.   (14)

This is the same as the Helmholtz equation

∇²Ψ + k²Ψ = 0,   (15)

with E = k², which can describe waves on membranes with clamped edges in two dimensions24 and electromagnetic waves in conducting cavities in three dimensions.25,26 A special case is the Laplace equation

∇²Ψ = 0,   (16)

which is central to the mathematical problem of Laplacian eigenvalues for planar domains with Dirichlet boundary conditions.20

Computers and the algorithms they instantiate are increasingly important in science, technology, and society, from spaceflight27,28 to cryptography.29 We have demonstrated that the JMP algorithm, when extended to two dimensions, enables neural networks to solve the stationary Schrödinger equation and find quantum energy eigenvalues and eigenfunctions for both classically regular and irregular billiards systems. Such capability is another example of physics-informed machine learning mastering dynamical systems that exhibit both order and chaos.1 

This success is a proof of concept that simple feed-forward neural networks, incorporating physics intuition in their loss functions, can solve complicated eigenvalue problems. Well-established state-of-the-art numerical methods18 are currently faster or more accurate, but 2D JMP neural networks illustrate the quantum potential of machine learning. Future work includes exploiting spatial symmetries and focused sampling to reduce the number of training points and generalizing to continuous and three-dimensional potential wells.

This research was supported by Office of Naval Research Grant No. N00014-16-1-3066 and a gift from the United Therapeutics Corporation.

The authors have no conflicts to disclose.

E.G.H. generalized the neural network code and acquired all the data. J.F.L. generated reference results, created the boundary functions, and finalized figures. W.L.D. supervised the work. All authors contributed to the manuscript.

Elliott G. Holliday: Conceptualization (equal); Formal analysis (equal); Investigation (lead); Methodology (equal); Software (lead); Validation (equal); Visualization (supporting); Writing – original draft (equal); Writing – review & editing (supporting). John F. Lindner: Conceptualization (equal); Formal analysis (equal); Methodology (equal); Validation (equal); Visualization (lead); Writing – original draft (equal); Writing – review & editing (lead). William L. Ditto: Conceptualization (equal); Funding acquisition (lead); Project administration (lead); Resources (lead); Supervision (lead); Writing – review & editing (supporting).

The data that support the findings of this study are available within the article.

Appendix A: Gradient descent

1. Dynamical analog

Gradient descent of a neural network weight w is similar to a point particle of mass m at position x sliding with viscosity γ on a potential energy surface V(x). Newton's laws imply

m ẍ = −γ ẋ − dV/dx,   (A1)

where the overdots indicate time differentiation. For large viscosity, |m ẍ| ≪ |γ ẋ| and

0 ≈ −γ ẋ − dV/dx,   (A2)

so the velocity

ẋ ≈ −(1/γ) dV/dx.   (A3)

The position evolves similar to the Euler update

x(t + dt) = x(t) + ẋ dt ≈ x(t) − (dt/γ) dV/dx.   (A4)

With position x = w, height V = L, and learning rate η = dt/γ, a neural network weight evolves such that

w(t + dt) = w(t) − η ∂L/∂w,   (A5)

and similarly for a bias. Increasing the learning rate makes the loss surface more "slippery," while decreasing the learning rate makes the loss surface more "sticky." Variable learning rates may expedite gradient descent to a global minimum. Stochastic gradient descent can be modeled by buffeting the sliding particle with noise.
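The analogy is easy to simulate; the following sketch (our illustration, with an arbitrary example loss surface) evolves a weight by the overdamped update of Eq. (A5), with added noise standing in for minibatch fluctuations.

```python
import numpy as np

def dLdw(w):                       # derivative of an example loss L(w) = (w**2 - 1)**2
    return 4.0 * w * (w**2 - 1.0)

eta, noise = 0.01, 0.05            # learning rate (dt/gamma) and noise strength
rng = np.random.default_rng(0)
w = 0.1                            # initial weight
for _ in range(2000):
    w -= eta * dLdw(w) + noise * np.sqrt(eta) * rng.standard_normal()

print(w)                           # settles near one of the minima at w = +/- 1
```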

2. Newton’s method

Alternatively, the minima of the loss function L(w) can be sought by finding the roots of its derivative L′(w) with the Newton–Raphson method, extending the tangent line to its intercept and stepping

w ← w − L′(w)/L″(w).   (A6)

If a minimum at w_m is approximately quadratic,

L(w) ≈ L_m + ½ a (w − w_m)²,   (A7)

then near the minimum Newton's method reduces to

w ← w − η L′(w),   (A8)

where the learning rate η = 1/a = 1/L″(w) is the inverse of the curvature.
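A few lines suffice to demonstrate the step of Eq. (A6) on an example loss (our illustration):

```python
def L1(w):                  # L'(w) for the example loss L(w) = (w - 2)**2/2 + w**4/12
    return (w - 2.0) + w**3 / 3.0

def L2(w):                  # L''(w)
    return 1.0 + w**2

w = 0.0
for _ in range(20):
    w -= L1(w) / L2(w)      # Newton step: gradient descent with eta = 1/L''(w)

print(w, L1(w))             # L'(w) ~ 0 at the minimum
```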
Appendix B: Normalization loss

For the normalization loss, Jin et al.13 proposed
(B1)
where c = (M/Δx)² and |Ψ⟩ = |Φ⟩/√(M/Δx). However, an alternative loss is
(B2)
The latter is arguably simpler, while the related function
(B3)
is problematic because it can be positive or negative.

Appendix C: Implementation details

The Fig. 4 Python sample code implements a simple neural network with sigmoid activation functions that learns the ground-state energy eigenvalue and eigenfunction of a particle in a one-dimensional box Ω = [0, 1]. Our implementation uses the PyTorch machine-learning library and its tensors (multidimensional rectangular arrays of numbers) throughout.

FIG. 4.

Example Python code of neural network learning to model a particle in a one-dimensional box Ω = [0, 1].


The class Net (lines 10–32) defines the network architecture (lines 14–19), implements a forward pass (lines 21–32), and enforces the Dirichlet boundary conditions (line 31). After the class is instantiated as the object object_net and the stochastic gradient descent optimizer is initialized (lines 34–35), the variable x is assigned a columnar array of equally spaced positions inside the box, with autograd tracking its operations (lines 37–38).

The for-loop (lines 40–54) manages the neural network training, first shuffling the x values (line 41) and then asking the object_net for the latest eigenfunction and energy eigenvalue approximations (line 42). The grad function invokes autograd to compute the Laplacian (lines 44–46). The .pow() and .mean() methods help compute the loss function (lines 48–50).

The .backward() method also invokes autograd and computes the gradients of the loss with respect to the weights and biases, which the optimizer then updates (lines 52–54). More generally, the .backward() method computes the .grad attribute of every tensor in the computational graph that has requires_grad = True, where the loss is the root of the graph and the parameters and inputs are its leaves. The optimizer then iterates through the list of weight and bias parameters it received when initialized and, wherever a tensor has requires_grad = True, subtracts its gradient (multiplied by the learning rate) stored in its .grad property.

Finally, the energy eigenvalue and eigenfunction are extracted as numbers and printed (lines 56–58). This working example returns a ground-state energy within about 1% of the exact value.
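For readers without access to Fig. 4, the following self-contained sketch follows the same recipe; the class, variable names, and hyperparameters here are our assumptions, not the figure's exact code.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, hidden=50):
        super().__init__()
        self.hidden1 = nn.Linear(1, hidden)
        self.hidden2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)
        self.E = nn.Parameter(torch.tensor(4.0))        # trainable energy eigenvalue

    def forward(self, x):
        h = torch.sigmoid(self.hidden1(x))
        h = torch.sigmoid(self.hidden2(h))
        psi = self.out(h)
        return x * (1.0 - x) * psi                      # B(x) = x(1 - x) enforces Psi(0) = Psi(1) = 0

net = Net()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
x = torch.linspace(0.0, 1.0, 100).reshape(-1, 1).requires_grad_(True)

for epoch in range(30000):
    Psi = net(x)
    dPsi = torch.autograd.grad(Psi, x, torch.ones_like(Psi), create_graph=True)[0]
    d2Psi = torch.autograd.grad(dPsi, x, torch.ones_like(dPsi), create_graph=True)[0]
    residual = -d2Psi - net.E * Psi                     # stationary Schrodinger equation with V = 0
    loss = residual.pow(2).mean() + (Psi.pow(2).mean() - 1.0).pow(2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(net.E))   # compare to the exact ground-state energy pi**2 ~ 9.87 for H = -d^2/dx^2 on [0, 1]
```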

1. A. Choudhary, J. F. Lindner, E. G. Holliday, S. T. Miller, S. Sinha, and W. L. Ditto, "Physics-enhanced neural networks learn order and chaos," Phys. Rev. E 101, 062207 (2020).
2. M. Finzi, K. A. Wang, and A. G. Wilson, "Simplifying Hamiltonian and Lagrangian neural networks via explicit constraints," arXiv:2010.13581 (2020).
3. M. Mattheakis, D. Sondak, A. S. Dogra, and P. Protopapas, "Hamiltonian neural networks for solving equations of motion," Phys. Rev. E 105, 065305 (2022).
4. R. Bondesan and A. Lamacraft, "Learning symmetries of classical integrable systems," arXiv:1906.04645 (2019).
5. Z. Liu and M. Tegmark, "Machine learning conservation laws from trajectories," Phys. Rev. Lett. 126, 180604 (2021).
6. S. J. Wetzel, R. G. Melko, J. Scott, M. Panju, and V. Ganesh, "Discovering symmetry invariants and conserved quantities by interpreting Siamese neural networks," arXiv:2003.04299 (2020).
7. S. Cai, Z. Wang, S. Wang, P. Perdikaris, and G. E. Karniadakis, "Physics-informed neural networks for heat transfer problems," J. Heat Transfer 143, 060801 (2021).
8. K. Lee, N. A. Trask, and P. Stinis, "Machine learning structure preserving brackets for forecasting irreversible processes," arXiv:2106.12619 (2021).
9. Y. D. Zhong, B. Dey, and A. Chakraborty, "Dissipative SymODEN: Encoding Hamiltonian dynamics with dissipation and control into deep learning," arXiv:2002.08860 (2020).
10. H. Cao, F. Cao, and D. Wang, "Quantum artificial neural networks with applications," Inf. Sci. 290, 1–6 (2015).
11. M. Raissi, P. Perdikaris, and G. E. Karniadakis, "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations," J. Comput. Phys. 378, 686–707 (2019).
12. H. Jin, M. Mattheakis, and P. Protopapas, "Unsupervised neural networks for quantum eigenvalue problems," in 2020 NeurIPS Workshop on Machine Learning and the Physical Sciences, NeurIPS, 2020.
13. H. Jin, M. Mattheakis, and P. Protopapas, "Physics-informed neural networks for quantum eigenvalue problems," in IJCNN at IEEE World Congress on Computational Intelligence, 2022.
14. S. Greydanus, M. Dzamba, and J. Yosinski, "Hamiltonian neural networks," arXiv:1906.01563 (2019).
15. P. Toth, D. J. Rezende, A. Jaegle, S. Racanière, A. Botev, and I. Higgins, "Hamiltonian generative networks," arXiv:1909.13789 (2019).
16. E. Mathieu, "Mémoire sur le mouvement vibratoire d'une membrane de forme elliptique," J. Math. Pures Appl. 13, 137–203 (1868).
17. S. Chakraborty, J. K. Bhattacharjee, and S. P. Khastgir, "An eigenvalue problem in two dimensions for an irregular boundary," J. Phys. A: Math. Theor. 42, 195301 (2009).
18. Č. Lozej, G. Casati, and T. Prosen, "Quantum chaos in triangular billiards," Phys. Rev. Res. 4, 013138 (2022).
19. K. Zahradova, J. Slipantschuk, O. F. Bandtlow, and W. Just, "Impact of symmetry on ergodic properties of triangular billiards," Phys. Rev. E 105, L012201 (2022).
20. B. J. McCartin, "On polygonal domains with trigonometric eigenfunctions of the Laplacian under Dirichlet or Neumann boundary conditions," Appl. Math. Sci. 2, 2891–2901 (2008).
21. G. Casati, F. Valz-Gris, and I. Guarnieri, "On the connection between quantization of nonintegrable systems and statistical theory of spectra," Lett. Nuovo Cimento 28, 279–282 (1980).
22. O. Bohigas, M. J. Giannoni, and C. Schmit, "Characterization of chaotic quantum spectra and universality of level fluctuation laws," Phys. Rev. Lett. 52, 1–4 (1984).
23. Wolfram Research, Inc., Mathematica, Version 13.1, Champaign, IL, 2022.
24. J. W. S. Rayleigh, Vibrations of Membranes (Dover, 1945), pp. 306–351.
25. H.-J. Stöckmann and J. Stein, "'Quantum' chaos in billiards studied by microwave absorption," Phys. Rev. Lett. 64, 2215–2218 (1990).
26. J. U. Nöckel and A. Douglas Stone, "Ray and wave chaos in asymmetric resonant optical cavities," Nature 385, 45–47 (1997).
27. K. Shi, C. Liu, Z. Sun, and X. Yue, "Coupled orbit-attitude dynamics and trajectory tracking control for spacecraft electromagnetic docking," Appl. Math. Modell. 101, 553 (2021).
28. C. Liu, X. Yue, J. Zhang, and K. Shi, "Active disturbance rejection control for delayed electromagnetic docking of spacecraft in elliptical orbits," IEEE Trans. Aerosp. Electron. Syst. 58, 2257–2268 (2022).
29. R. L. Rivest, A. Shamir, and L. Adleman, "A method for obtaining digital signatures and public-key cryptosystems," Commun. ACM 21, 120–126 (1978).