An empirical approach for analyzing the run-time of algorithms

https://medium.com/@swainson/the-3-steps-in-an-empirical-approach-for-analyzing-the-run-time-of-algorithms-f48d90e18908

Imagine an algorithm that solves a problem at a small scale. It works perfectly, so you forget about it and carry on with your project to meet the deadline looming over you like dark clouds ready to pour out golf-ball-sized spiky hailstones. Elated with its efficiency, you base several other functions or algorithms on it. Then one day the unbelievable happens: the program must run at a large scale, and it completely fails. At the small scale it produced the solution in nanoseconds, but at the larger scale you watch it slow to the pace of a sloth, taking hours if not days to run. You’ve heard of this thing called big O notation and that it helps with this sort of problem, but you suspect it’s some type of superhero who only helps if you offer a blood sacrifice. You could wing the analysis, you think, but not before the deadline. How then does one solve this problem? Here is a hint: run the algorithm on test cases of increasing size, up to the largest input you expect, and measure how long each run takes. Think of it as an experiment-based approach for finding the run-time of your code at specific input sizes, so you can identify the slow areas and fix them to deliver an optimal program.

Essentially, we treat the algorithm as an experiment: we run it against a benchmark of inputs under some stated assumptions, record the timings, and graph or tabulate them to gauge the complexity of the code.
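As a minimal sketch of this idea (the helper names, the input sizes, and the deliberately slow bubble_sort used as the algorithm under test are illustrative choices, not from the article), the snippet below times one function on random inputs of doubling size and prints the results, which you could then plot or inspect for a growth pattern:

```python
import random
import time


def timed_run(algorithm, data):
    """Run the algorithm once and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    algorithm(data)
    return time.perf_counter() - start


def benchmark(algorithm, sizes, trials=3):
    """Time `algorithm` on random inputs of each size, keeping the best of a few trials."""
    results = []
    for n in sizes:
        data = [random.random() for _ in range(n)]
        best = min(timed_run(algorithm, list(data)) for _ in range(trials))
        results.append((n, best))
    return results


def bubble_sort(items):
    """A deliberately quadratic algorithm, used here only as the subject of the experiment."""
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items


if __name__ == "__main__":
    for n, seconds in benchmark(bubble_sort, [1_000, 2_000, 4_000, 8_000]):
        print(f"n={n:>6}  time={seconds:.4f}s")
    # If doubling n roughly quadruples the time, the data suggests O(n^2) growth.
```

Reading the printed timings (or plotting them against n) gives an empirical picture of how the run-time grows with input size, which is exactly the experimental complement to a pen-and-paper big O analysis.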
