Top Ten Algorithms of the 20th Century

BBS Shuimu Tsinghua Station - Essence Article Reading

--------------------------------------------------------------------------------
Sender: Rao (around the interview with god), Board: NumComp
Title: [Collection] Top Ten Algorithms of the 20th Century (in English; which hero will translate it?)
Posted from: BBS Shuimu Tsinghua Station (Wed Jul 16 01:44:49 2003)

☆ ───────────────────────────── ☆ neco (I am a cat) in (12:29:47 2003) mentioned:

By Barry A. Cipra

Algos is the Greek word for pain. Algor is Latin, to be cold. Neither is the root for algorithm, which stems instead from al-Khwarizmi, the name of the ninth-century Arab scholar whose book al-jabr wa'l muqabalah devolved into today's high school algebra textbooks. Al-Khwarizmi stressed the importance of methodical procedures for solving problems. Were he around today, he'd no doubt be impressed by the advances in his eponymous approach.

Some of the very best algorithms of the computer age are highlighted in the January/February 2000 issue of Computing in Science & Engineering, a joint publication of the American Institute of Physics and the IEEE Computer Society. Guest editors Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory and Francis Sullivan of the Center for Computing Sciences at the Institute for Defense Analyses put together a list they call the "Top Ten Algorithms of the Century."

"We tried to assemble the 10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century," Dongarra and Sullivan write. As with any top-10 list, their selections (and non-selections) are bound to be controversial, they acknowledge. When it comes to picking the algorithmic best, there seems to be no best algorithm.

Without further ado, here's the CiSE top-10 list, in chronological order. (Dates and names associated with the algorithms should be read as first-order approximations. Most algorithms take shape over time, with many contributors.)

1. 1946: John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo method.

The Metropolis algorithm aims to obtain approximate solutions to numerical problems with unmanageably many degrees of freedom and to combinatorial problems of factorial size, by mimicking a random process. Given the digital computer's reputation for deterministic calculation, it's fitting that one of its earliest applications was the generation of random numbers.

2. 1947: George Dantzig, at the RAND Corporation, creates the simplex method for linear programming.

In terms of widespread application, Dantzig's algorithm is one of the most successful of all time: Linear programming dominates the world of industry, where economic survival depends on the ability to optimize within budgetary and other constraints. (Of course, the "real" problems of industry are often nonlinear; the use of linear programming is sometimes dictated by the computational budget.) The simplex method is an elegant way of arriving at optimal answers. Although theoretically susceptible to exponential delays, the algorithm in practice is highly efficient, which in itself says something interesting about the nature of computation.
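To make these two methods concrete, here are two minimal Python sketches (my additions, not part of Cipra's article). The first estimates pi by random sampling, the textbook illustration of solving a problem by mimicking a random process:

```python
import random

def monte_carlo_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by sampling points uniformly in the unit square
    and counting how many land inside the quarter circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n_samples

print(monte_carlo_pi(1_000_000))  # error shrinks like O(1/sqrt(n))
```

The second sets up a toy linear program with SciPy's linprog. Note that modern SciPy solves it with the HiGHS solvers by default rather than Dantzig's original simplex, so this only illustrates the shape of the problem:

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4 and x, y >= 0,
# written as minimizing -(x + 2y), as linprog expects.
result = linprog(c=[-1.0, -2.0], A_ub=[[1.0, 1.0]], b_ub=[4.0],
                 bounds=[(0, None), (0, None)])
print(result.x)  # optimal vertex, here approximately (0, 4)
```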

3. 1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods.

These algorithms address the seemingly simple task of solving equations of the form Ax = b. The catch, of course, is that A is a huge n x n matrix, so that the algebraic answer x = b/A is not so easy to compute. (Indeed, matrix "division" is not a particularly useful concept.) Iterative methods, such as solving equations of the form Kx_{i+1} = Kx_i + b - Ax_i with a simpler matrix K that's ideally "close" to A, lead to the study of Krylov subspaces. Named for the Russian mathematician Nikolai Krylov, Krylov subspaces are spanned by powers of a matrix applied to an initial "remainder" vector r_0 = b - Ax_0. Lanczos found a nifty way to generate an orthogonal basis for such a subspace when the matrix is symmetric. Hestenes and Stiefel proposed an even niftier method, known as the conjugate gradient method, for systems that are both symmetric and positive definite. Over the last 50 years, numerous researchers have improved and extended these algorithms. The current suite includes techniques for nonsymmetric systems, with acronyms like GMRES and Bi-CGSTAB. (GMRES and Bi-CGSTAB premiered in the SIAM Journal on Scientific and Statistical Computing, in 1986 and 1992, respectively.)
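For readers who want to see the Hestenes-Stiefel idea in action, here is a minimal textbook conjugate gradient sketch in Python/NumPy (my addition, assuming A is symmetric positive definite); production codes use preconditioned variants and the nonsymmetric cousins named above:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive definite A.
    Each iteration needs only one matrix-vector product."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x          # initial residual r_0 = b - A x_0
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # new A-conjugate direction
        rs_old = rs_new
    return x

# Small SPD test problem (illustrative only).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))  # matches np.linalg.solve(A, b)
```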

4. 1951: Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations.

The ability to factor matrices into triangular, diagonal, orthogonal, and other special forms has turned out to be extremely useful. The decompositional approach has enabled software developers to produce flexible and efficient matrix packages. It also facilitates the analysis of rounding errors, one of the big bugbears of numerical linear algebra. (In 1961, James Wilkinson of the National Physical Laboratory in London published a seminal paper in the Journal of the ACM, titled "Error Analysis of Direct Methods of Matrix Inversion," based on the LU decomposition of a matrix as a product of lower and upper triangular factors.)

5. 1957: John Backus leads a team at IBM in developing the Fortran optimizing compiler.

The creation of Fortran may rank as the single most important event in the history of computer programming: Finally, scientists (and others) could tell the computer what they wanted it to do, without having to descend into the netherworld of machine code. Although modest by modern compiler standards (Fortran I consisted of a mere 23,500 assembly-language instructions), the early compiler was nonetheless capable of surprisingly sophisticated computations. As Backus himself recalls in a recent history of Fortran I, II, and III, published in 1998 in the IEEE Annals of the History of Computing, the compiler "produced code of such efficiency that its output would startle the programmers who studied it."
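As a small illustration of the decompositional approach (my addition; the matrix and right-hand side are arbitrary): factor once, then solve by cheap triangular substitutions, here with SciPy's LU routine:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])

P, L, U = lu(A)                                # A = P @ L @ U, with partial pivoting
y = solve_triangular(L, P.T @ b, lower=True)   # forward substitution
x = solve_triangular(U, y)                     # back substitution
print(np.allclose(A @ x, b))                   # True
```

The same factorization can be reused for any number of right-hand sides, which is exactly why matrix packages expose the factors rather than a single "solve" black box.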

6. 1959-61: J.G.F. Francis of Ferranti Ltd., London, finds a stable method for computing eigenvalues, known as the QR algorithm.

Eigenvalues are arguably the most important numbers associated with matrices, and they can be the trickiest to compute. It's relatively easy to transform a square matrix into a matrix that's "almost" upper triangular, meaning one with a single extra set of nonzero entries just below the main diagonal. But chipping away those final nonzeros, without launching an avalanche of error, is nontrivial. The QR algorithm is just the ticket. Based on the QR decomposition, which writes A as the product of an orthogonal matrix Q and an upper triangular matrix R, this approach iteratively changes A_i = QR into A_{i+1} = RQ, with a few bells and whistles for accelerating convergence to upper triangular form. By the mid-1960s, the QR algorithm had turned once-formidable eigenvalue problems into routine calculations.

7. 1962: Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort.

Putting N things in numerical or alphabetical order is mind-numbingly mundane. The intellectual challenge lies in devising ways of doing so quickly. Hoare's algorithm uses the age-old recursive strategy of divide and conquer to solve the problem: Pick one element as a "pivot," separate the rest into "big" and "small" elements (as compared with the pivot), and then repeat this procedure on each pile. Although it's possible to get stuck doing all N(N - 1)/2 comparisons (especially if you use as your pivot the first item on a list that's already sorted!), Quicksort runs on average with O(N log N) efficiency. Its elegant simplicity has made Quicksort the poster child of computational complexity.
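Two short Python sketches, added for illustration. First, the bare unshifted QR iteration; real implementations reduce to Hessenberg form first and add shifts, so treat this only as a demonstration that A_{i+1} = RQ is a similarity transform that drives the matrix toward triangular form:

```python
import numpy as np

def qr_eigenvalues(A, n_iter=500):
    """Unshifted QR iteration on a small symmetric matrix; the
    eigenvalues emerge on the diagonal."""
    Ak = np.array(A, dtype=float)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q            # same eigenvalues as Q @ R = Ak
    return np.diag(Ak)

A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(sorted(qr_eigenvalues(A)))  # approximately [1.0, 3.0]
```

Second, the textbook (not in-place) Quicksort, deliberately using the first element as pivot so the already-sorted worst case described above is easy to reproduce:

```python
def quicksort(items):
    """Pick a pivot, partition into small and big piles, recurse."""
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    small = [x for x in rest if x <= pivot]
    big = [x for x in rest if x > pivot]
    return quicksort(small) + [pivot] + quicksort(big)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```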

8. 1965: James Cooley of the IBM T.J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveil the fast Fourier transform.

Easily the most far-reaching algorithm in applied mathematics, the FFT revolutionized signal processing. The underlying idea goes back to Gauss (who needed to calculate orbits of asteroids), but it was the Cooley-Tukey paper that made it clear how easily Fourier transforms can be computed. Like Quicksort, the FFT relies on a divide-and-conquer strategy to reduce an ostensibly O(N^2) chore to an O(N log N) frolic. But unlike Quicksort, the implementation is (at first sight) nonintuitive and less than straightforward. This in itself gave computer science an impetus to investigate the inherent complexity of computational problems and algorithms.
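A minimal recursive radix-2 Cooley-Tukey sketch (my addition; real FFT libraries are iterative and handle arbitrary lengths) shows where the O(N log N) comes from: each level splits the transform into two half-size transforms plus O(N) twiddle-factor work:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    result = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + twiddle
        result[k + n // 2] = even[k] - twiddle
    return result

print(fft([1, 2, 3, 4]))  # matches numpy.fft.fft([1, 2, 3, 4])
```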

9. 1977: Helaman Ferguson and Rodney Forcade of Brigham Young University advance an integer relation detection algorithm.

The problem is an old one: Given a bunch of real numbers, say x_1, x_2, ..., x_n, are there integers a_1, a_2, ..., a_n (not all 0) for which a_1 x_1 + a_2 x_2 + ... + a_n x_n = 0? For n = 2, the venerable Euclidean algorithm does the job, computing terms in the continued-fraction expansion of x_1 / x_2. If x_1 / x_2 is rational, the expansion terminates and, with proper unraveling, gives the "smallest" integers a_1 and a_2. If the Euclidean algorithm does not terminate (or if you simply get tired of computing it), then the unraveling procedure at least provides lower bounds on the size of the smallest integer relation. Ferguson and Forcade's generalization, although much more difficult to implement (and to understand), is also more powerful. Their detection algorithm, for example, has been used to find the precise coefficients of the polynomials satisfied by the third and fourth bifurcation points, B3 = 3.544090 and B4 = 3.564407, of the logistic map. (The latter polynomial is of degree 120; its largest coefficient is 257^30.) It has also proved useful in simplifying calculations with Feynman diagrams in quantum field theory.
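Here is a tiny sketch (my addition) of the n = 2 case only: the Euclidean algorithm viewed as a continued-fraction expansion, which terminates exactly when the ratio is rational. Ferguson and Forcade's general-n algorithm is far more involved:

```python
from fractions import Fraction

def continued_fraction(x: Fraction, max_terms: int = 50):
    """Continued-fraction expansion of x via the Euclidean algorithm.
    For rational x the expansion terminates, which is the n = 2
    integer-relation test described above."""
    terms = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator   # integer part
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac                       # invert the remainder
    return terms

# 355/113 is rational, so the expansion terminates: [3, 7, 16].
# Unraveling it recovers the relation 113 * (355/113) - 355 * 1 = 0.
print(continued_fraction(Fraction(355, 113)))
```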

10. 1987: Leslie Greengard and Vladimir Rokhlin of Yale University invent the fast multipole algorithm.

This algorithm overcomes one of the biggest headaches of N-body simulations: the fact that accurate calculations of the motions of N particles interacting via gravitational or electrostatic forces (think stars in a galaxy, or atoms in a protein) would seem to require O(N^2) computations, one for each pair of particles. The fast multipole algorithm gets by with O(N) computations. It does so by using multipole expansions (net charge or mass, dipole moment, quadrupole, and so forth) to approximate the effects of a distant group of particles on a local group. A hierarchical decomposition of space is used to define ever-larger groups as distances increase. One of the distinct advantages of the fast multipole algorithm is that it comes equipped with rigorous error estimates, a feature that many methods lack.

What new insights and algorithms will the 21st century bring? The complete answer obviously won't be known for another hundred years. One thing seems certain, however. As Sullivan writes in the introduction to the top-10 list, "The new century is not going to be very restful for us, but it is not going to be dull either!"
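A full fast multipole code, with its hierarchical tree and translation operators, is well beyond a sketch. The following Python snippet (my addition, with made-up cluster data) shows only the core trick at lowest order: lumping a distant group into its net mass at the center of mass and comparing against the O(N^2) direct sum:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated clusters of unit-mass particles in 2D.
local = rng.normal(loc=(0.0, 0.0), scale=0.1, size=(100, 2))
distant = rng.normal(loc=(10.0, 0.0), scale=0.1, size=(100, 2))

def direct_potential(targets, sources):
    """O(N*M) pairwise 1/r potential at each target point."""
    diff = targets[:, None, :] - sources[None, :, :]
    r = np.linalg.norm(diff, axis=2)
    return (1.0 / r).sum(axis=1)

# Monopole (lowest-order multipole) approximation: replace the distant
# cluster by its total mass placed at its center of mass.
center = distant.mean(axis=0)
approx = len(distant) / np.linalg.norm(local - center, axis=1)

exact = direct_potential(local, distant)
print(np.max(np.abs(approx - exact) / exact))  # small for well-separated groups
```

Higher-order terms (dipole, quadrupole, ...) shrink this error systematically, which is where the algorithm's rigorous error estimates come from.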

☆ ───────────────────────────── ☆ superronaldo (wxyw) in (Sat Jul 12 14:26:12 2003) mentioned:

I'll translate the last one.

10. 1987: L. Greengard and V. Rokhlin invent the fast multipole algorithm. This algorithm overcomes one of the biggest bottlenecks in many-particle simulation: accurately computing the interactions of N particles coupled through gravitational or electrostatic forces (such as stars in a galaxy, or atoms in a protein) would seem to require computations at the O(N^2) level. The fast multipole algorithm achieves order O(N). It does this by using a multipole expansion (net charge or mass, dipole moment, quadrupole, and so on) to approximate the effect of a distant particle group on a nearby particle group. A hierarchical decomposition of space is used to describe ever-larger groups as the distance increases. One significant advantage of the fast multipole algorithm is that the accuracy can be tuned arbitrarily, a feature many other methods lack.

[In the masterpiece of neco (I am a cat):]
: By Barry A. Cipra
: Algos is the Greek word for pain. Algor is Latin, to be cold. ...
: .................

☆ ───────────────────────────── ☆ Jumping (Man in Experiments) in (Sat Jul 12 14:44:20 2003) mentioned:

So the birth of Fortran counts as an algorithm too? Heh heh :P

[In the masterpiece of neco (I am a cat):]
: By Barry A. Cipra
: Algos is the Greek word for pain. Algor is Latin, to be cold. ...
: .................

☆ ───────────────────────────── ☆ zhangby (friend) in (Sat Jul 12 15:04:28 2003) mentioned:

Is there ready-made C code for this algorithm that can be downloaded? Thanks.

[In the masterpiece of superronaldo (wxyw):]
: I'll translate the last one.
: 10. 1987: L. Greengard and V. Rokhlin invent the fast multipole algorithm.
: .................

☆ ───────────────────────────── ☆ nofd (away from FD) in (Sat Jul 12 15:59:22 2003) mentioned:

These top ten algorithms seem to have been recommended by the editors and invited contributors, and the discussion is (mainly) in the original literature, so is a translation really necessary?

[In the masterpiece of neco (I am a cat):]
: By Barry A. Cipra
: Algos is the Greek word for pain. Algor is Latin, to be cold. ...
: ...............

☆ ───────────────────────────── ☆ nofd (away from FD) in (Sat Jul 12 16:02:37 2003) mentioned:

It is the foundation on which algorithms are implemented and evaluated, and its contribution to NA (numerical analysis) is huge; the ranking is well deserved.

[In the masterpiece of Jumping (Man in Experiments):]
: So the birth of Fortran counts as an algorithm too? Heh heh :P

☆ ───────────────────────────── ☆ superronaldo (wxyw) in (Sat Jul 12 17:01:30 2003) mentioned:

I haven't heard of any ready-made O(N) code; besides, the kernel functions differ from problem to problem.

[In the masterpiece of zhangby (friend):]
: Is there ready-made C code for this algorithm that can be downloaded? Thanks.

☆ ───────────────────────────── ☆ weck (parallel computing) in (Sat Jul 12 20:00:04 2003) mentioned:

In theory the multilevel fast multipole method can reach O(N log N); the single-level multipole method reaches O(N^1.5).

[In the masterpiece of superronaldo (wxyw):]
: I haven't heard of any ready-made O(N) code; besides, the kernel functions differ from problem to problem.

☆ ───────────────────────────── ☆ (Sat Jul 12 21:57:48 2003) mentioned:

The O(N^1.5) figure is for the Helmholtz equation of electromagnetic problems; for the Laplace equation, the Stokes equation, and the equations of elasticity it is of order O(N), and there are two versions.

[In the masterpiece of weck (parallel computing):]
: In theory the multilevel fast multipole method can reach O(N log N);
: the single-level multipole method reaches O(N^1.5).

☆ ───────────────────────────── ☆ (Sat Jul 12 21:58:43 2003) mentioned:

Google it; there are plenty.

[In the masterpiece of zhangby (friend):]
: Is there a paper?

--------------------------------------------------------------------------------
