An algorithm is a specific procedure for accomplishing some job. Much of computer science has to do with
finding or creating better algorithms for solving computational problems.
We usually describe computational algorithms using pseudocode, and we characterize the performance of
algorithms using the term “order of growth” or “theta.” The order of growth of an algorithm tells us, in a simplified
way, how the running time of the algorithm varies with problem size. We provided examples
of algorithms whose orders of growth were Θ(lg n), Θ(n), Θ(n lg n), Θ(n²), Θ(2ⁿ), and Θ(n!).
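To make the contrast between these growth rates concrete, here is a small Python sketch (my own illustration, not from the original text) that tabulates each function for a few problem sizes:

```python
import math

# Tabulate each order of growth mentioned above for a few problem sizes.
# The function and column names are illustrative, not from the text.
def growth_table(sizes):
    rows = []
    for n in sizes:
        rows.append({
            "n": n,
            "lg n": math.log2(n),
            "n lg n": n * math.log2(n),
            "n^2": n ** 2,
            "2^n": 2 ** n,
            "n!": math.factorial(n),
        })
    return rows

for row in growth_table([4, 8, 16]):
    print(row)
```

Even at n = 16 the factorial row dwarfs everything else, which is why the order of growth, rather than constant factors, dominates an algorithm's practicality on large inputs.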
Algorithm development should be considered an important part of computing technology. In fact, a better
algorithm for an important task may be much more impactful than any foreseeable near-term improvement in
computing hardware speed.
The Turing machine is a formal mathematical model of computation, and the Church–Turing thesis
maintains that any algorithmic procedure to manipulate symbols can be conducted by some Turing machine.
We gave example Turing machines to perform the simple binary operations of complementing and incrementing
a binary number.
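The incrementing machine can be sketched as a small simulator. The states and transition table below are my own reconstruction of a binary-increment machine, not necessarily the one given in the text: the head scans right to the end of the number, then propagates a carry leftward.

```python
# Tape symbols: '0', '1', and '_' (blank).
# delta maps (state, symbol) -> (new_state, symbol_to_write, head_move).
delta = {
    # state 'right': scan right to find the end of the number
    ('right', '0'): ('right', '0', +1),
    ('right', '1'): ('right', '1', +1),
    ('right', '_'): ('carry', '_', -1),
    # state 'carry': add 1, propagating the carry leftward
    ('carry', '1'): ('carry', '0', -1),
    ('carry', '0'): ('halt',  '1', 0),
    ('carry', '_'): ('halt',  '1', 0),
}

def increment(bits):
    tape = dict(enumerate(bits))      # sparse tape; cells default to blank
    state, head = 'right', 0
    while state != 'halt':
        state, tape[head], move = delta[(state, tape.get(head, '_'))]
        head += move
    return ''.join(sym for _, sym in sorted(tape.items())).strip('_')

print(increment('1011'))   # 11 + 1 = 12 -> '1100'
print(increment('111'))    # 7 + 1 = 8  -> '1000'
```

The complementing machine is even simpler: a single state that flips each symbol ('0' to '1' and vice versa) while moving right, halting at the first blank.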
Some problems in computing are provably unsolvable. For instance, Turing proved that it is impossible to
write one computer program that can inspect any other program and decide whether the program in question will, or
will not, run to completion on a given set of inputs; this is known as the halting problem. While the "Holy Grail" of an algorithm to prove the
correctness of programs has been proven to be only a phantom in the dreams of computer scientists, computer
scientists at least know that is so, and can work instead on practical test plans for real programs.
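The heart of Turing's argument can be sketched in a few lines. This is my own construction, not one from the text: given any candidate halting checker, we build a program that does the opposite of whatever the checker predicts about it, so the checker must be wrong about that program.

```python
# Given any candidate halting checker, construct a program that defeats it.
def make_paradox(candidate_halts):
    def paradox():
        if candidate_halts(paradox):
            while True:        # checker said "halts", so loop forever
                pass
        # checker said "loops forever", so halt immediately
    return paradox

# Example: a naive checker that claims every program halts.
always_yes = lambda prog: True
p = make_paradox(always_yes)
# always_yes predicts that p halts, but running p() would loop forever,
# so the checker is wrong. A checker answering False fails the other way.
print(always_yes(p))   # True, yet p() never halts
```

Since this construction works against every possible checker, no program can decide halting for all programs and inputs.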