Algorithms Flashcards

algorithms elements

- Sequential operations
- Actions based on the state of a data structure
- Iteration: repeating an action a number of times
- Recursion: calling itself on a subset of inputs

algorithm design paradigms

- Divide and conquer
- Greedy algorithms
- Dynamic programming

divide and conquer paradigm

The ... paradigm involves breaking a problem into smaller subproblems and then combining the results in some way to obtain a global solution. This is a very common and natural problem-solving technique and is, arguably, the most commonly used approach to algorithm design.

Greedy algorithm

...s often involve optimization and combinatorial problems; the classic example is applying one to the travelling salesperson problem, where a ... approach always chooses the closest destination first. This nearest-first strategy involves finding the best solution to a local problem in the hope that this will lead to a global solution. Examples: Prim's algorithm for finding a minimum spanning tree, the knapsack problem, and the travelling salesperson problem.

dynamic programming

The ... approach is useful when our subproblems overlap. This is different from divide and conquer: rather than breaking the problem into independent subproblems, with ..., intermediate results are cached and can be used in subsequent operations. Like divide and conquer it uses recursion; however, ... allows us to compare results at different stages. This can give a performance advantage over divide and conquer for some problems, because it is often quicker to retrieve a previously calculated result from memory than to recalculate it.

recursion

... is particularly useful for divide and conquer problems; however, it can be difficult to understand exactly what is happening, since each recursive call is itself spinning off other recursive calls. At the core of a recursive function are two types of cases: base cases, which tell the ... when to terminate, and recursive cases that call the function they are in. A simple problem that naturally lends itself to a recursive solution is calculating factorials.
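The factorial calculation mentioned above lends itself to a minimal recursive sketch showing both kinds of cases:

```python
def factorial(n):
    # Base case: tells the recursion when to terminate.
    if n <= 1:
        return 1
    # Recursive case: the function calls itself on a smaller input.
    return n * factorial(n - 1)
```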

Backtracking

... is a form of recursion that is particularly useful for types of problems such as traversing tree structures, where we are presented with a number of options at each node, from which we must choose one. Subsequently we are presented with a different set of options, and depending on the series of choices made either a goal state or a dead end is reached. If it is the latter, we must backtrack to a previous node and traverse a different branch. ... is a divide and conquer method for exhaustive search. Importantly ... prunes branches that cannot give a result.
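A classic illustration of backtracking with pruning is the N-queens problem (our example, not part of the card): at each row we choose a column, and branches that cannot lead to a valid placement are pruned before being explored.

```python
def solve_n_queens(n):
    # Count placements of n non-attacking queens via backtracking.
    solutions = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal solutions
        if row == n:  # goal state reached
            solutions += 1
            return
        for col in range(n):
            # Prune branches that cannot give a result.
            if col in cols or (row + col) in diag1 or (row - col) in diag2:
                continue
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            place(row + 1)  # explore this branch
            # Dead end or done: backtrack and try a different column.
            cols.remove(col); diag1.remove(row + col); diag2.remove(row - col)

    place(0)
    return solutions
```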

Asymptotic analysis

There are essentially three things that characterize an algorithm's runtime performance:
- Worst case: use an input that gives the slowest performance
- Best case: use an input that gives the best results
- Average case: assume the input is random
To calculate each of these, we need to know the upper and lower bounds. We have seen a way to represent an algorithm's runtime using mathematical expressions, essentially adding and multiplying operations. To use ..., we simply create two expressions, one each for the best and worst cases.

big O notation

In computer science, ... is used to classify algorithms according to how their running time or space requirements grow as the input size grows (an upper bound). In analytic number theory, ... is often used to express a bound on the difference between an arithmetical function and a better-understood approximation; a famous example of such a difference is the remainder term in the prime number theorem.

O(1)

Big O notation - name: constant example operations: append, get item, set item.

O(log n)

Big O notation - name: logarithmic example operations: Finding an element in a sorted array.

O(n)

Big O notation - name: linear example operations: copy, insert, delete, iteration.

O(n log n)

name: log-linear (linearithmic) example operations: sort a list, merge sort.
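Merge sort, the example operation named above, is the standard O(n log n) divide and conquer sort; a minimal sketch:

```python
def merge_sort(items):
    # O(n log n): split the list in half, sort each half, then merge.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```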

O(n²)

name: quadratic example operations: find the shortest path between two nodes in a graph; nested loops.

O(n³)

name: cubic example operations: matrix multiplication.

O(2^n)

name: exponential example operations: 'Towers of Hanoi' problem, backtracking.

Omega notation (Ω)

In a similar way that Big O notation describes an upper bound, ... describes a lower bound. The objective is to give the largest rate of growth that is less than or equal to the given algorithm's rate of growth, T(n).

Theta notation (Θ)

It is often the case where both the upper and lower bounds of a given function are the same and the purpose of ... is to determine if this is the case.

timeit

used for measuring the execution time of code:
statement = "bubble_sort(array=arrayx)"
total_time = ...(stmt=statement, globals=globals(), number=nn) * 1000
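A self-contained version of the card's snippet (the `bubble_sort` function, `arrayx`, and the repeat count are our own stand-ins for the card's placeholder names):

```python
import timeit

def bubble_sort(array):
    # Simple O(n^2) sort used here only as the code under test.
    items = list(array)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

arrayx = [9, 3, 7, 1, 5]

# Run the statement 1000 times; timeit returns total seconds, *1000 -> ms.
statement = "bubble_sort(array=arrayx)"
total_time = timeit.timeit(stmt=statement, globals=globals(), number=1000) * 1000
print(f"{total_time:.3f} ms")
```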

Amortized analysis

Often we are not so interested in the time complexity of individual operations, but rather the time averaged running time of sequences of operations. This is called .... It is different from average case analysis, which we will discuss shortly, in that it makes no assumptions regarding the data distribution of input values. It does, however, take into account the state change of data structures. For example, if a list is sorted it should make any subsequent find operations quicker. ... can take into account the state change of data structures because it analyzes sequences of operations, rather than simply aggregating single operations.

hashing

A ... algorithm maps data of arbitrary size to a hash of a fixed size. A cryptographic hash function is additionally designed to be a one-way function, infeasible to invert.
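A minimal sketch using Python's hashlib (SHA-256 chosen as the example): whatever the input size, the digest has a fixed size.

```python
import hashlib

# SHA-256 always produces a 32-byte (64 hex character) digest,
# regardless of the input's size.
short_digest = hashlib.sha256(b"hi").hexdigest()
long_digest = hashlib.sha256(b"x" * 1_000_000).hexdigest()
```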

graph

A ... is a set of vertices and edges that form connections between the vertices. In a more formal approach, a ... G is an ordered pair of a set V of vertices and a set E of edges given as G = (V, E) in formal mathematical notation.

Undirected Graph

A graph that contains edges between vertices with no specific direction associated with any edge.

Directed Graph

This is a graph where an edge has a direction associated with it, for example, a plane flight that takes off in one location and arrives in another. The return flight would be considered a separate edge.

Weighted graph

A graph in which there is a number associated with each edge [its weight].

adjacency list

a representation of a graph in which each node has a list of nodes that are adjacent to it, i.e. connected to it by an arc.
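A minimal sketch of this representation in Python, using a dict that maps each vertex to the list of vertices adjacent to it (the vertex names are our own example):

```python
# Undirected graph A—B, A—C, B—D as an adjacency list.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A"],
    "D": ["B"],
}

# The neighbours of a vertex are simply its list entry.
neighbours_of_a = graph["A"]
```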

Adjacency Matrix

A matrix which records the number of direct links between vertices https://o.quizlet.com/kgFgYOfukcN7bxUu.lYTFw_m.jpg

Breadth-First Search

An algorithm for traversing or searching tree or graph data structures. It typically starts at the tree root and explores the neighbor nodes first, before moving to the next-level neighbors.
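A minimal sketch of this level-by-level traversal using a queue (the graph shape is our own example):

```python
from collections import deque

def bfs(graph, start):
    # Visit the start node, then all its neighbours, then their
    # neighbours, level by level.
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order
```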

Depth-First Search

a search in which the children of a node are considered (recursively) before its siblings are considered.
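A minimal recursive sketch of this children-before-siblings traversal (the graph shape is our own example):

```python
def dfs(graph, node, visited=None):
    # Explore each child fully (recursively) before moving to siblings.
    if visited is None:
        visited = []
    visited.append(node)
    for neighbour in graph.get(node, []):
        if neighbour not in visited:
            dfs(graph, neighbour, visited)
    return visited
```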

Priority Queue

A ... is basically a type of queue that will always return items in order of priority. This priority could be, for example, that the lowest item is always popped off first. Although it is called a queue, ...s are often implemented using a heap, since it is very efficient for this purpose.
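A minimal sketch using Python's heap-based heapq module, where the item with the lowest priority number is always popped first (the task names are our own example):

```python
import heapq

pq = []
# Push (priority, item) pairs; heapq keeps the smallest pair at index 0.
heapq.heappush(pq, (3, "write report"))
heapq.heappush(pq, (1, "fix outage"))
heapq.heappush(pq, (2, "review code"))

# Items come back in priority order: lowest number first.
order = [heapq.heappop(pq)[1] for _ in range(len(pq))]
```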

heap

A ... is a data structure that satisfies the ... property. The ... property states that there must be a certain relationship between a parent node and its child nodes. This property must apply through the entire ...

Min Heap

A binary tree in which the keys of parent nodes are less than or equal to those of their children, so the lowest key is in the root node.

Max Heap

A binary tree in which the keys of parent nodes are always greater than or equal to those of their children, so the greatest key is in the root node.

Linear Search

a method for finding a target value within a list. It sequentially checks each element of the list for the target value until a match is found or until all the elements have been searched. O(n)

Binary Search

Looking for an item in an already sorted list by eliminating large portions of the data on each comparison. O(log n)
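A minimal sketch of this technique, halving the remaining search range on every comparison:

```python
def binary_search(sorted_items, target):
    # Eliminate half of the remaining range on each comparison: O(log n).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1  # target can only be in the right half
        else:
            hi = mid - 1  # target can only be in the left half
    return -1  # not found
```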

Interpolation Search

... search solves the same problem as binary search, but requires knowledge of the distribution of elements in an array. O(log (log n))

Insertion Sort

A simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. O(n^2)
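A minimal sketch of this one-item-at-a-time approach:

```python
def insertion_sort(items):
    # Build a sorted prefix one item at a time: O(n^2) worst case.
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift larger elements of the sorted prefix right
        # to open a slot for key.
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items
```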

Selection Sort

A sorting routine that uses a nested loop process to systematically select the best value among the unsorted elements of the array for the next position in the array, starting with position zero all the way to the end. O(n^2)
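The nested-loop selection described above can be sketched as:

```python
def selection_sort(items):
    # For each position, select the smallest remaining value: O(n^2).
    for i in range(len(items)):
        min_idx = i
        # Inner loop: find the best (smallest) unsorted element.
        for j in range(i + 1, len(items)):
            if items[j] < items[min_idx]:
                min_idx = j
        items[i], items[min_idx] = items[min_idx], items[i]
    return items
```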

Quick Sort

Non-stable, in-place sort with an order of growth of N log N. Needs log N of extra space. It has a probabilistic guarantee. Works by making use of a divide and conquer method. The array is divided into two parts, and then the parts are sorted independently. An arbitrary value is chosen as the partition. Afterwards, all items which are larger than this value go to the right of it, and all items which are less than this value go to the left of it. We arbitrarily choose a[lo] as the partitioning item. Then we scan from the left end of the array until we find an entry that is greater than a[lo]. At the same time, we scan from the right end of the array to the left until we find an entry that is less than or equal to a[lo]. Once we find these two values, we swap them.
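The partitioning scheme described above (choosing a[lo] and scanning from both ends, swapping out-of-place pairs) can be sketched as:

```python
def quick_sort(a, lo=0, hi=None):
    # In-place quicksort using a[lo] as the partitioning item.
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[lo]
    i, j = lo + 1, hi
    while True:
        while i <= hi and a[i] < pivot:  # scan right for an entry >= pivot
            i += 1
        while a[j] > pivot:              # scan left for an entry <= pivot
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]          # swap the out-of-place pair
        i += 1
        j -= 1
    a[lo], a[j] = a[j], a[lo]            # put the pivot into its final slot
    quick_sort(a, lo, j - 1)             # sort the two parts independently
    quick_sort(a, j + 1, hi)
    return a
```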

Heap Sort

Non-stable, in place sort which has an order of growth of O(NlogN). Requires only one spot of extra space. Works like an improved version of selection sort. It divides its input into a sorted and unsorted region, and iteratively shrinks the unsorted region by extracting the smallest element and moving it into the sorted region. It will make use of a heap structure instead of a linear time search to find the minimum.
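A compact sketch using Python's heapq min-heap; note that, unlike the in-place version the card describes, this sketch returns a new list, but it shows the same idea of repeatedly extracting the smallest element from a heap.

```python
import heapq

def heap_sort(items):
    # Heapify in O(n), then extract the minimum n times: O(n log n) total.
    heap = list(items)
    heapq.heapify(heap)
    # Each pop moves the smallest remaining element into the sorted region.
    return [heapq.heappop(heap) for _ in range(len(heap))]
```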

Memoization

This dynamic programming technique starts from the initial problem and divides it into small subproblems. After the solution to a subproblem has been determined, we store its result. In the future, when this subproblem is encountered again, we simply return its pre-computed result.
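A minimal sketch using Fibonacci numbers (our example) and functools.lru_cache to store each subproblem's result:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed once; repeat calls with the same n
    # return the cached (pre-computed) result.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```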

Tabulation

In this dynamic programming technique, we settle on an approach where we fill a table of solutions to subproblems and then combine them to solve bigger problems.
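A minimal sketch using Fibonacci numbers (our example), filling a table of subproblem solutions from the smallest case upward:

```python
def fib_table(n):
    # Bottom-up: fill a table of answers to subproblems, then
    # combine them to answer the bigger problem.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```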

P-type problems

In computer science, the class of problems that computers can solve within polynomial time using a step-wise process of logical steps is called ..., where P stands for polynomial. These are relatively easy to solve.

NP-Type problems

Then there is another class of problems that is considered very hard to solve. Here "hard" refers to how rapidly the difficulty of finding a solution grows with the size of the problem. However, despite this high growth rate in difficulty, it is possible to verify whether a proposed solution solves the problem in polynomial time. These are the .... NP here stands for nondeterministic polynomial time.

NP-Hard

A problem is ... if every problem in NP can be reduced (mapped) to it in polynomial time.

NP-Complete

problems for which no polynomial-time algorithmic solutions are known. A problem is considered an ... problem if it is, first of all, NP-hard and is also found in the NP class.