diff --git a/strings_arrays/binary_search.md b/strings_arrays/binary_search.md
index 036889c..f1b64a9 100644
--- a/strings_arrays/binary_search.md
+++ b/strings_arrays/binary_search.md
@@ -1,10 +1,10 @@
-Binary search is a method for locating an element in a sorted list efficiently. Searching for an element can done naively in **O(N)** time, but binary search speeds it up to **O(log N)**. Binary search is a great tool to keep in mind for array problems.
+Binary search is a technique for efficiently locating an element in a sorted list. Searching for an element can be done naively in **O(n)** time by checking every element in the list, but binary search speeds this up to **O(log n)**. Binary search is a great tool to keep in mind for array problems.
Algorithm
------------------
-In binary search, you are provided a list of sorted numbers and a key. The desired output is the index of the key, if it exists and None if it doesn't.
+In binary search, you are provided a sorted list of numbers and a key. The desired output of a binary search is the index of the key in the sorted list, if the key is in the list, or `None` otherwise.
-Binary search is a recursive algorithm. The high level approach is that we examine the middle element of the list. The value of the middle element determines whether to terminate the algorithm (found the key), recursively search the left half of the list, or recursively search the right half of the list.
+Binary search is a recursive algorithm. From a high-level perspective, we examine the middle element of the list, which determines whether to terminate the algorithm (found the key), recursively search the left half of the list (middle element value > key), or recursively search the right half of the list (middle element value < key).
```
def binary_search(nums, key):
if nums is empty:
@@ -13,7 +13,7 @@ def binary_search(nums, key):
return middle index
if middle element is greater than key:
binary search left half of nums
- if middle element is less than
+ if middle element is less than key:
binary search right half of nums
```
@@ -21,12 +21,12 @@ There are two canonical ways of implementing binary search: recursive and iterat
### Recursive Binary Search
-The recursive solution utilizes a helper function to keep track of pointers to the section of the list we are currently examining. The search either completes when we find the key, or the two pointers meet.
+The recursive approach utilizes a helper function to keep track of pointers to the section of the list we are currently examining. The search terminates either when we find the key or when the two pointers meet.
```python
def binary_search(nums, key):
return binary_search_helper(nums, key, 0, len(nums))
-
+
def binary_search_helper(nums, key, start_idx, end_idx):
middle_idx = (start_idx + end_idx) // 2
if start_idx == end_idx:
@@ -41,7 +41,7 @@ def binary_search_helper(nums, key, start_idx, end_idx):
### Iterative Binary Search
-The iterative solution manually keeps track of the section of the list we are examining, using the two-pointer technique. The search either completes when we find the key, or the two pointers meet.
+The iterative approach manually keeps track of the section of the list we are examining using the two-pointer technique. The search terminates either when we find the key or when the two pointers meet.
```python
def binary_search(nums, key):
left_idx, right_idx = 0, len(nums)
@@ -58,7 +58,7 @@ def binary_search(nums, key):
## Runtime and Space Complexity
-Binary search completes in **O(log N)** time because each iteration decreases the size of the list by a factor of 2. Its space complexity is constant because we only need to maintain two pointers to locations in the list. Even the recursive solution has constant space with [tail call optimization](https://en.wikipedia.org/wiki/Tail_call).
+Binary search has **O(log n)** time complexity because each iteration decreases the size of the list by a factor of 2. Its space complexity is constant because we only need to maintain two pointers. Even the recursive solution has constant space with [tail call optimization](https://en.wikipedia.org/wiki/Tail_call).
## Example problems
* [Search insert position](https://leetcode.com/problems/search-insert-position/description/)
diff --git a/strings_arrays/sorting.md b/strings_arrays/sorting.md
index 500a4ad..a96f751 100644
--- a/strings_arrays/sorting.md
+++ b/strings_arrays/sorting.md
@@ -1,8 +1,8 @@
Sorting is a fundamental tool for tackling problems, and is often utilized to help simplify problems.
-There are several different sorting algorithms, each with different tradeoffs. In this guide, we will cover several well-known sorting algorithms along with when they are useful.
+There are several different sorting algorithms, each with different tradeoffs. In this guide, we will cover several well-known sorting algorithms along with when they are useful.
-We will go into detail for merge sort and quick sort, but will describe the rest at a high level.
+We will describe merge sort and quick sort in detail and the remainder of the featured sorting algorithms at a high level.
## Terminology
Two commonly used terms in sorting are:
@@ -11,11 +11,11 @@ Two commonly used terms in sorting are:
2. **stable sort**: retains the order of duplicate elements after the sort ([3, 2, 4, **2**] -> [2, **2**, 3, 4])
## Merge sort
-**Merge sort** is perhaps the simplest sort to implement and has very consistent behavior. It adopts a divide-and-conquer strategy: recursively sort each half of the list, and then perform an O(N) merging operation to create a fully sorted list.
+**Merge sort** is perhaps the simplest sort to implement and has very consistent behavior. It adopts a divide-and-conquer strategy: recursively sort each half of the list, and then perform an O(n) merging operation to create a fully sorted list.
### Implementation
-The key operation in merge sort is `merge`, which is a function that takes two sorted lists and returns a single list which is sorted.
+The key operation in merge sort is `merge`, which is a function that takes two sorted lists and returns a single sorted list composed of elements of the combined lists.
```python
def merge(list1, list2):
if len(list1) == 0:
@@ -31,7 +31,6 @@ This is a recursive implementation of `merge`, but an iterative implementation w
Given this `merge` operation, writing merge sort is quite simple.
-
```python
def merge_sort(nums):
if len(nums) <= 1:
@@ -43,18 +42,17 @@ def merge_sort(nums):
```
### Runtime
-
-Merge sort is a recursive, divide and conquer algorithm. It takes O(log N) recursive merge sorts and each merge is O(N) time, so we have a final runtime of O(N log N) for merge sort. Its behavior is consistent regardless of the input list (its worst case and best case take the same amount of time).
+Merge sort is a recursive, divide and conquer algorithm. It takes O(log n) recursive merge sorts and each merge is O(n) time, so we have a final runtime of O(n log n) for merge sort. Its behavior is consistent regardless of the input list (its worst case and best case take the same amount of time).
**Summary**
-* Worst case: O(N log N)
-* Best case: O(N log N)
-* Stable: yes
-* In-place: no
+
+| Worst case | Best case | Stable | In-place|
+|:----------:|:---------:|:------:|:-------:|
+| O(n log n) | O(n log n) | ✅ | ❌ |
## Quick sort
-**Quick sort** is also a divide and conquer strategy, but uses a two-pointer swapping technique instead of `merge`. The core idea of quick sort is to select a "pivot" element in the list (typically the middle element), and swap elements in the list such that everything left of the pivot is less than it, and everything right of the pivot is greater. We call this operation `partition`. Quick sort is notable in its ability to sort efficiently in-place.
+**Quick sort** is also a divide and conquer strategy, but uses a two-pointer swapping technique instead of `merge`. The core idea of quick sort is selecting a "pivot" element in the list (typically the middle element), and swapping elements in the list such that everything left of the pivot is less than it, and everything right of the pivot is greater. We call this operation `partition`. Quick sort is notable for its ability to sort efficiently in-place.
```python
def partition(nums, left_idx, right_idx):
@@ -70,7 +68,7 @@ def partition(nums, left_idx, right_idx):
left_idx += 1
right_idx -= 1
```
-The partition function modifies `nums` inplace and takes up no extra memory. It also takes O(N) time in the worst case to fully partition a list.
+The partition function modifies `nums` in-place and requires no extra memory. It also takes O(n) time in the worst case to fully partition a list.
```python
def quick_sort_helper(nums, left_idx, right_idx):
@@ -88,56 +86,62 @@ def quick_sort(nums):
### Runtime
-The best case performance of quick sort is O(N log N), but depending on the structure of the list, quick sort's performance can vary.
+The best case performance of quick sort is O(n log n), but depending on the structure of the list, quick sort's performance can vary.
-If the pivot happens to be the median of the list, then the list will be divided in half after the partition.
+If the pivot happens to be the median of the list, then the list will be divided in half after the partition.
-In the worst case, however, the list will be divided into an N - 1 length list and an empty list. Thus, in the worst possible case, quick sort has O(N2) performance, since we'll have to recursively quicksort (N - 1), (N - 2), ... many lists. However, on average and in practice, quick sort is still very fast due to how fast swapping array elements is.
-The space complexity for this version of quick sort os O(log N), due to the number of call stacks created during recursion, but an iterative version can make space complexity O(1).
+In the worst case, however, the list will be divided into a list of length n - 1 and an empty list. Thus, in the worst case, quick sort has O(n²) performance, since we'll have to recursively quick sort lists of length (n - 1), (n - 2), and so on. However, on average and in practice, quick sort is still very fast due to how fast swapping array elements is.
+The space complexity of this version of quick sort is O(log n), due to the call stack frames created during recursion, but an iterative version can achieve O(1) space complexity.
**Summary**
-* Worst case: O(N2)
-* Best case: O(N log N)
-* Stable: no
-* In-place: yes
+
+| Worst case | Best case | Stable | In-place|
+|:----------:|:---------:|:------:|:-------:|
+| O(n²) | O(n log n) | ❌ | ✅ |
## Insertion sort
In **insertion sort**, we incrementally build a sorted list from the unsorted list. We take elements from the unsorted list and insert them into the sorted list, making sure to maintain the order.
-This algorithm takes O(N2) worst time, because looping through the unsorted list takes O(N) and finding the proper place to insert can take O(N) time in the worst case. However, if the list is already sorted, insertion sort takes O(N) time, since insertion time will be O(1). Insertion sort can be done in-place, so it takes up O(1) space.
+This algorithm takes O(n²) time in the worst case, because looping through the unsorted list takes O(n) and finding the proper place to insert can take O(n) in the worst case. However, if the list is already sorted, insertion sort takes O(n) time, since each insertion will be O(1). Insertion sort can be done in-place, so it takes up O(1) space.
-Insertion sort is easier on linked lists, which have O(1) insertion whereas arrays have O(N) insertion because in an array, inserting an element requires shifting all the elements behind that element.
+Insertion sort is easier on linked lists, which have O(1) insertion, whereas arrays have O(n) insertion because inserting an element into an array requires shifting all the elements after it.
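+The incremental build described above can be sketched as a short in-place routine (an illustrative version, not code from this repo; the function name is ours):
+
```python
def insertion_sort(nums):
    # Everything left of index i is already sorted; insert nums[i] into place.
    for i in range(1, len(nums)):
        current = nums[i]
        j = i - 1
        # Shift larger sorted elements one slot right to open a gap.
        while j >= 0 and nums[j] > current:
            nums[j + 1] = nums[j]
            j -= 1
        nums[j + 1] = current
    return nums
```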
**Summary**
-* Worst case: O(N^2^)
-* Best case: O(N)
-* Stable: yes
-* In-place: yes
+
+| Worst case | Best case | Stable | In-place|
+|:----------:|:---------:|:------:|:-------:|
+| O(n²) | O(n) | ✅ | ✅ |
## Selection sort
**Selection sort** incrementally builds a sorted list by finding the minimum value in the rest of the list, and swapping it to be in the front.
-It takes O(N2) time in general, because we have to loop through the unsorted list which is O(N) and in each iteration, we search the rest of the list which always takes O(N). Selection sort can be done in-place, so it takes up O(1) space.
+It takes O(n²) time in general, because we loop through the unsorted list, which is O(n), and in each iteration we search the rest of the list, which always takes O(n). Selection sort can be done in-place, so it takes up O(1) space.
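+The find-minimum-and-swap loop can be sketched as follows (an illustrative in-place version; the function name is ours):
+
```python
def selection_sort(nums):
    for i in range(len(nums)):
        # Find the index of the minimum value in the unsorted suffix nums[i:].
        min_idx = i
        for j in range(i + 1, len(nums)):
            if nums[j] < nums[min_idx]:
                min_idx = j
        # Swap the minimum to the front of the unsorted section.
        nums[i], nums[min_idx] = nums[min_idx], nums[i]
    return nums
```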
-**Summary**
-* Worst case: O(N2)
-* Best case: O(N2)
-* Stable: no
-* In-place: yes
+**Summary**
+
+| Worst case | Best case | Stable | In-place|
+|:----------:|:---------:|:------:|:-------:|
+| O(n²) | O(n²) | ❌ | ✅ |
## Radix sort
**Radix sort** is a situational sorting algorithm when you know that the numbers you are sorting are bounded in some way. It operates by grouping numbers in the list by digit, looping through the digits in some order.
-For example, if we had the list [100, 10, 1], radix sort would put 100 in the group which had 1 in the 100s digit place and would put (10, 1) in a group which had 0 in the 100s digit place. It would then sort by the 10s digit place, and finally the 1s digit place.
+For example, if we had the list ```[100, 10, 1]```, radix sort would put 100 in the group which had 1 in the 100s digit place and would put (10, 1) in a group which had 0 in the 100s digit place. It would then sort by the 10s digit place, and finally the 1s digit place.
-Radix sort thus needs one pass for each digit place it is sorting and takes O(KN) time, where K is the number of passes necessary to cover all digits.
+Radix sort thus needs one pass for each digit place it is sorting and takes O(kn) time, where k is the number of passes necessary to cover all digits.
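+One way to sketch this is a least-significant-digit radix sort for non-negative integers (an illustrative version; the function name is ours):
+
```python
def radix_sort(nums):
    # LSD radix sort: bucket by each digit place, least significant first.
    # Because each pass is stable, earlier passes' order is preserved.
    if not nums:
        return nums
    place = 1
    while max(nums) // place > 0:
        buckets = [[] for _ in range(10)]
        for num in nums:
            buckets[(num // place) % 10].append(num)
        # Rebuild the list from the buckets (not in-place, matching the table).
        nums = [num for bucket in buckets for num in bucket]
        place *= 10
    return nums
```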
-**Summary**
-* Worst case: O(KN)
-* Best case: O(KN)
-* Stable: yes (if going through digits from right to left)
-* In-place: no
+**Summary**
+
+| Worst case | Best case | Stable | In-place|
+|:----------:|:---------:|:------:|:-------:|
+| O(kn) | O(kn) | ✅ (if going through digits from right to left) | ❌ |
+
+## Summary
+
+|Sort | Worst case | Best case | Stable | In-place|
+|:-:|:----------:|:---------:|:------:|:-------:|
+|Merge sort | O(n log n) | O(n log n) | ✅ | ❌ |
+|Quick sort | O(n²) | O(n log n) | ❌ | ✅ |
+|Insertion sort | O(n²) | O(n) | ✅ | ✅ |
+|Selection sort | O(n²) | O(n²) | ❌ | ✅ |
+|Radix sort | O(kn) | O(kn) | ✅ (if going through digits from right to left) | ❌ |
diff --git a/strings_arrays/sorting_colors.md b/strings_arrays/sorting_colors.md
index 3fa6156..f47b0c2 100644
--- a/strings_arrays/sorting_colors.md
+++ b/strings_arrays/sorting_colors.md
@@ -12,18 +12,17 @@ Example:
```
## Approach #1: Merge or quick sort
+### Approach
+The problem is asking us to sort a list of integers, so we can use an algorithm like merge sort or quick sort.
-**Approach**
-The problem is asking us to sort a list of integers, so we could potentially use an algorithm like merge sort or quick sort.
-
-**Time and space complexity**
-With a sorting algorithm such as is O(N log N) in the worst case. The space complexity is O(1) since we sort in place.
+### Time and space complexity
+Merge sort gives O(n log n) worst case time complexity; quick sort averages O(n log n) but can degrade to O(n²). As for space, quick sort sorts in-place with O(log n) stack space, while merge sort requires O(n) auxiliary space for merging.
## Approach #2: Counting sort
-**Approach**
-We know that the numbers we are sorting are 0, 1, or 2. This leads to an efficient counting sort implementation, since we can just count the numbers of each and modify the list in place to match the counts in sorted order.
+### Approach
+We know that the numbers we are sorting are 0, 1, or 2. This means we can sort more efficiently by simply counting the numbers of times each of the three values occurs and modifying the list in-place to match the counts in sorted order.
-**Implementation**
+### Implementation
```python
from collections import defaultdict
def sort_colors(colors):
@@ -42,41 +41,40 @@ def sort_colors(colors):
idx += 1
```
-**Time and space complexity**
-This solution has complexity O(N), since we loop through the list once, then loop through the dictionary to modify our list, both of which take N time. This solution takes up O(1) space, since everything is done in place and the counts dictionary has a constant size.
+### Time and space complexity
+This solution has O(n) time complexity, since we loop through the list once and then loop through the dictionary to overwrite our list, both of which take O(n) time. It takes up O(1) space, since everything is done in-place and the counts dictionary has a constant size (at most three keys).
## Approach #3: Three-way partition
This approach uses multiple pointers. Reading the [two pointer guide](https://guides.codepath.com/compsci/Two-pointer) may be helpful.
-**Approach**
-Although we cannot asymptotically do better than O(N) since we need to pass through the list at least once, we can limit our code to only making one pass. This will be slightly faster than approach #2.
+### Approach
+Although we cannot asymptotically do better than O(n), since we need to pass through the list at least once, we can limit our code to only making one pass. This will be slightly faster than approach #2.
-We can accomplish this by seeing that sorting an array with three distinct elements is equivalent to a `partition` operation. Recall that in quick sort, we partition an array to put all elements less than a pivot to the left and greater than to a right. Since we only have three potential values in our list, partitioning using the middle value as a pivot will effectively sort the list.
+We can accomplish this by recognizing that sorting an array with three distinct elements is equivalent to a partition operation. Recall that in quick sort, we partition an array to put all elements less than a pivot on its left and all elements greater than the pivot on its right. Since we only have three potential values in our list, partitioning using the middle value as a pivot will effectively sort the list.
This particular type of partition is a bit tricky though because we're partitioning on the middle element (the 1's) of our list. It's called a three-way partition, since we are also grouping together elements that are equal in the middle (the 1's).
-**Implementation**
-
+### Implementation
```python
def sort_colors(colors):
left, middle, right = 0, 0, len(colors) - 1
while middle <= right:
if colors[middle] == 0:
- colors[middle], colors[left] = colors[left], colors[middle]
+ colors[middle], colors[left] = colors[left], colors[middle]
left += 1
middle += 1
elif colors[middle] == 1:
middle += 1
elif colors[middle] == 2:
- colors[middle], colors[right] = colors[right], colors[middle]
+ colors[middle], colors[right] = colors[right], colors[middle]
right -= 1
middle += 1
```
-**Time and space complexity**
-This solution has also has complexity O(N), but only takes one pass since it uses two pointers that stop moving when one moves past the other.
+### Time and space complexity
+This solution also has time complexity O(n), but only takes one pass, since it uses two pointers that stop moving when one moves past the other.
It is slightly faster than the counting sort and is O(1) space, since it is in-place.
diff --git a/strings_arrays/strings_arrays.md b/strings_arrays/strings_arrays.md
index 5713344..3cf5d07 100644
--- a/strings_arrays/strings_arrays.md
+++ b/strings_arrays/strings_arrays.md
@@ -1,139 +1,38 @@
-The **two pointer method** is a helpful technique to always keep in mind when working with strings and arrays questions. It's a clever optimization that can help reduce time complexity with no added space complexity (a win-win!) by utilizing extra pointers to avoid repetitive operations.
+## Arrays
+An **array** is a data structure that holds a fixed number of objects. Because arrays have fixed sizes, they allow highly efficient O(1) lookups by index, regardless of the array's size. However, there is a tradeoff for this fast access time: any insertion or deletion in the middle of the array requires moving the rest of the elements to fill in or close the gap. To optimize time efficiency, try to add and delete mostly from the end of the array.
-This approach is best demonstrated through a walkthrough, as done below.
+Arrays commonly come up in interviews, so it's important to review the array library for the language you code in.
-## Problem: Minimum size subarray sum
+**Tips:**
+* Off-by-one errors often happen with arrays, so be wary of over-indexing, as accessing an index beyond the array's bounds will throw an error
+* Try to add elements to the back of an array instead of the front, as adding to the front requires shifting every element back
+* In Java, arrays are a fixed size so consider utilizing an [ArrayList](https://docs.oracle.com/javase/8/docs/api/java/util/ArrayList.html) instead if you need to dynamically alter the size of the array.
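+As a small Python illustration of why the ends of an array are cheaper than the front (costs noted are for Python's built-in list):
+
```python
nums = [1, 2, 3]
nums.append(4)     # amortized O(1): no shifting needed       -> [1, 2, 3, 4]
nums.insert(0, 0)  # O(n): every existing element shifts right -> [0, 1, 2, 3, 4]
nums.pop()         # O(1): removes from the end                -> [0, 1, 2, 3]
nums.pop(0)        # O(n): shifts everything left again        -> [1, 2, 3]
```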
-Given an array of n positive integers and a positive integer s, find the minimal length of a contiguous subarray of which the sum ≥ s. If there isn't one, return 0 instead.
+## Strings
+**Strings** are a special kind of array – one that only contains characters. They commonly come up in interview questions, so it's important to go through the string library for the language you're most comfortable with. You should know common operations such as: getting the length, getting a substring, splitting a string based on a delimiter, etc.
-**Example:**
-```python
->>> min_sub_array_length([2,3,1,2,4,3], 7)
-2
-```
+It's important to note that whenever you mutate a string, a new copy of the string is created. There are different ways to reduce the space utilized depending on the language:
+* In Python, you can represent a string as a list of characters and operate on the list of characters instead.
+* In Java, you can utilize the [StringBuffer](https://docs.oracle.com/javase/7/docs/api/java/lang/StringBuffer.html) class to mitigate the amount of space utilized if you need to mutate a string.
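+For instance, the Python list-of-characters pattern might look like this (an illustrative in-place reversal; the function name is ours):
+
```python
def reverse_string(s):
    # Convert to a mutable list of characters, mutate in-place,
    # then join back into a string with a single O(n) pass.
    chars = list(s)
    left, right = 0, len(chars) - 1
    while left < right:
        chars[left], chars[right] = chars[right], chars[left]
        left += 1
        right -= 1
    return "".join(chars)
```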
-Explanation: the subarray [4,3] has the minimal length under the problem constraint.
+## Patterns List
+* [Two pointer](https://guides.codepath.com/compsci/Two-pointer)
+* [Binary Search](https://guides.codepath.com/compsci/Binary-Search)
-## Solution #1: Brute Force
-**Approach**
+### Strings
+#### General guide
+* [Coding for Interviews Strings Guide](http://blog.codingforinterviews.com/string-questions/)
-When first given a problem, if an optimal solution is not immediately clear, it's better to have a solution that works rather than to be stuck. With this problem, a brute force solution would be to generate all possible subarrays and find the length of the shortest subarray that sums up to a sum that is greater than or equal to the given number.
+#### Strings in C++
+ * [Character Arrays in C++](https://www.youtube.com/watch?v=Bf8a6IC1dE8)
+ * [Character Arrays in C++ Part 2](https://www.youtube.com/watch?v=vFZTxvUoZSU)
-**Implementation**
-```python
-def min_sub_array_length(nums, sum):
- min_length = float("inf")
- for start_idx in range(len(nums)):
- for end_idx in range(start_idx, len(nums)):
- subarray_sum = get_sum(nums, start_idx, end_idx)
- if subarray_sum >= sum:
- min_length = min(min_length, end_idx - start_idx + 1)
- return min_length if min_length != float("inf") else 0
-
-def get_sum(nums, start_index, end_index):
- result = 0
- for i in range(start_index, end_index + 1):
- result += nums[i]
- return result
-```
-
-**Time and space complexity**
-
-The time complexity of this solution would be O(N3). The double for loop results in O(N2) calls to get_sum and each call to get_sum has a worst case run time of O(N), which results in a O(N2 * N) = **O(N3) runtime**.
-
-The space complexity would be **O(1)** because the solution doesn't create new data structures.
-
-## Improvements
-#### Optimization #1: Keep track of a running sum instead of running `get_sum` in each iteration of the inner `end_idx` for loop
-In the brute solution, a lot of repetitive calculations are done in the inner `end_idx` for loop with the `get_sum` function. Instead of having to recalculate the sum from elements `start_idx` to `end_idx` within every iteration of the `end_idx` loop, we can store a `subarray_sum` variable to store calculations from previous iterations and simply add to it within each iteration of the `end_idx` loop.
-
-```python
-def min_sub_array_length(nums, sum):
- min_length = float("inf")
- for start_idx in range(len(nums)):
- subarray_sum = 0
- for end_idx in range(start_idx, len(nums)):
- subarray_sum += nums[end_idx]
- if subarray_sum >= sum:
- min_length = min(min_length, end_idx - start_idx + 1)
- return min_length if min_length != float("inf") else 0
-```
-
-This optimization reduces the time complexity from O(N3) to O(N2) with the addition of a variable to store the accumulating sum.
-
-
-#### Optimization #2: Reduce number of calculations by terminating the inner `end_idx` for loop early
-With the improved solution, we can further reduce the number of iterations in the inner for loop by terminating it early. Once we have a `subarray_sum` that is equal to or greater than the target sum, we can simply move to the next iteration of the outer for loop. This is because the questions asks for minimum length subarray and any further iterations of the inner for loop would only cause an increase in the subarray length.
-
-```python
-def min_sub_array_length(nums, sum):
- min_length = float("inf")
- for start_idx in range(len(nums)):
- subarray_sum = 0
- for end_idx in range(start_idx, len(nums)):
- subarray_sum += nums[end_idx]
- if subarray_sum >= sum:
- min_length = min(min_length, end_idx - start_idx + 1)
- continue
- return min_length if min_length != float("inf") else 0
-```
-
-This is a minor time complexity improvement and this solution will still have a worst case runtime of O(N2). The improvement is nice, but to reduce the runtime from O(N2) to O(N), we would need to somehow eliminate the inner for loop.
-
-
-## Solution #2: Two pointer approach
-**Approach**
-The optimal, two pointer approach to this problem utilizing the observations we made in the previous section. The main idea of this approach is that we grow and shrink an interval as we loop through the list while keeping a running sum that we update as we alter the interval.
-
-There will be two pointers, one to track the start of the interval and the other to track the end. They will both start at the beginning of the list and will dynamically move to the right until we hit the end of the list.
-
-First, we grow the interval to the right until it exceeds the minimum sum. Once we find that interval, we move the start pointer right as much as we can to shrink the interval until it sums up to a number that is smaller than the sum.
-
-Then, we move the end pointer to once again to try and hit the sum with new intervals. If growing the interval by moving the end pointer leads to an interval that sums up to at least the target sum, we need to repeat the process of trying to shrink the interval again by moving the start pointer before further moving the end pointer.
-
-As we utilize these two pointers to determine which intervals to evaluate, we have a variable to keep track of the current sum of the interval as we go along to avoid recalculating it every time one of the pointers moves to the right and another variable to store the length of the shortest interval that sums up to >= the target sum.
-
-This push and pull of the end and start pointer will continue until we finish looping through the list.
-
-
-**Implementation**
-```python
-def min_sub_array_length(nums, sum):
- start_idx = 0
- min_length, subarray_sum = float('inf'), 0
-
- for end_idx in range(len(nums)):
- subarray_sum += nums[end_idx]
- while subarray_sum >= sum:
- min_length = min(min_length, end_idx - start_idx + 1)
- subarray_sum -= nums[start_idx]
- start_idx += 1
- if min_length == float('inf'):
- return 0
- return min_length
-```
-
-**Time and space complexity**
-
-The time complexity of this solution would be **O(N)** because each element is at most visited twice. In the worst case scenario, all elements will be visited once by the start pointer and another time by the end pointer.
-
-The space complexity would be **O(1)** because the solution doesn't create new data structures.
-
-**Walkthrough**
-
-Take the example of `min_sub_array_length([2,3,1,2,4,3], 7)`. The left pointer starts at 0 and the right doesn't exist yet.
-
-As we start looping through the list, our first interval is [2]. We won't fulfill the while loop condition until the list reaches [2, 3, 1, 2] whose sum, 8 is >= 7. We then set the `min_length` to 4.
-
-Now, we shrink the interval to [3, 1, 2] by increasing `start_idx` by 1. This new interval sums up to less than the target sum, 7 so we need to grow the interval. In the next iteration, we grow the interval to [3, 1, 2, 4], which has a sum of 10 and once again, we satisfy the while loop condition.
-
-We then shrink the interval to [1, 2, 4]. This is the shortest interval we've come across that sums up to at least the target sum, so we update the `min_length` to 3.
-
-We now move the `end_idx` pointer and it hits the end of the list, with interval [2, 4, 3]. Then shrink the interval to [4, 3], which sums up to 7, the target sum. This is the shortest interval we've come across that sums up to at least the target sum, so we update the `min_length` to 2. This is the final result that is returned.
-
-## Takeaways
-
-This optimization can often be applied to improve solutions that involve the use of multiple for loops, as shown in the example above. If you have an approach that utilizes multiple for loops, analyze the actions within those for loops to determine if repetitive calculations can be removed through strategic movements of multiple pointers.
-
-**Note**: Though this walkthrough demonstrated applying the two pointer approach to an arrays problem, this approach is commonly utilized to solve string problems as well.
+### Arrays
+#### General guide
+ * [InterviewCake Arrays](https://www.interviewcake.com/concept/java/array)
+#### Python arrays
+* [Google developer lists guide](https://developers.google.com/edu/python/lists)
+#### Java arrays
+ * [InterviewCake DynamicArray](https://www.interviewcake.com/concept/java/dynamic-array-amortized-analysis?)
+ * [ArrayList Succinctly Guide](https://code.tutsplus.com/tutorials/the-array-list--cms-20661)
diff --git a/strings_arrays/two_pointer.md b/strings_arrays/two_pointer.md
index 19fb340..5c1db68 100644
--- a/strings_arrays/two_pointer.md
+++ b/strings_arrays/two_pointer.md
@@ -1,6 +1,6 @@
-The **two pointer method** is a helpful technique to always keep in mind when working with strings and arrays questions. It's a clever optimization that can help reduce time complexity with no added space complexity (a win-win!) by utilizing extra pointers to avoid repetitive operations.
+The **two pointer method** is a helpful technique to keep in mind when working with strings and arrays. It's a clever optimization that can help reduce time complexity with no added space complexity (a win-win!) by utilizing extra pointers to avoid repetitive operations.
-This approach is best demonstrated through a walkthrough, as done below.
+This approach is best demonstrated through a walkthrough, as done below.
## Problem: Minimum size subarray sum
@@ -15,12 +15,12 @@ Given an array of n positive integers and a positive integer s, find the minimal
Explanation: the subarray [4,3] has the minimal length under the problem constraint.
## Solution #1: Brute Force
-**Approach**
+### Approach
-When first given a problem, if an optimal solution is not immediately clear, it's better to have a solution that works rather than to be stuck. With this problem, a brute force solution would be to generate all possible subarrays and find the length of the shortest subarray that sums up to a sum that is greater than or equal to the given number.
+When first given a problem, if an optimal solution is not immediately clear, it's better to have any solution that works than be stuck. With this problem, a brute force solution would be to generate all possible subarrays and find the length of the shortest subarray that sums up to a sum that is greater than or equal to the given number.
-**Implementation**
-```python=
+### Implementation
+```python
def min_sub_array_length(nums, sum):
min_length = float("inf")
for start_idx in range(len(nums)):
@@ -37,17 +37,19 @@ def get_sum(nums, start_index, end_index):
return result
```
-**Time and space complexity**
+### Time and space complexity
-The time complexity of this solution would be O(N^3^). The double for loop results in O(N^2^) calls to get_sum and each call to get_sum has a worst case run time of O(N), which results in a O(N^2^ * N) = **O(N^3^) runtime**.
+The time complexity of this solution would be O(n³). The double for loop results in O(n²) calls to get_sum and each call to get_sum has a worst case run time of O(n), which results in a O(n² * n) = **O(n³) runtime**.
The space complexity would be **O(1)** because the solution doesn't create new data structures.
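For reference, the brute-force approach described above can be sketched in full as below. The parameter is named `target` here instead of `sum` to avoid shadowing Python's built-in, and the `else 0` fallback for when no subarray qualifies is an assumption:

```python
def min_sub_array_length(nums, target):
    # Try every possible subarray and track the shortest qualifying length.
    min_length = float("inf")
    for start_idx in range(len(nums)):
        for end_idx in range(start_idx, len(nums)):
            if get_sum(nums, start_idx, end_idx) >= target:
                min_length = min(min_length, end_idx - start_idx + 1)
    return min_length if min_length != float("inf") else 0

def get_sum(nums, start_index, end_index):
    # Sum the elements from start_index to end_index, inclusive.
    result = 0
    for i in range(start_index, end_index + 1):
        result += nums[i]
    return result
```

Running `min_sub_array_length([2, 3, 1, 2, 4, 3], 7)` returns 2, matching the example from the problem statement.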
## Improvements
-#### Optimization #1: Keep track of a running sum instead of running `get_sum` in each iteration of the inner `end_idx` for loop
-In the brute solution, a lot of repetitive calculations are done in the inner `end_idx` for loop with the `get_sum` function. Instead of having to recalculate the sum from elements `start_idx` to `end_idx` within every iteration of the `end_idx` loop, we can store a `subarray_sum` variable to store calculations from previous iterations and simply add to it within each iteration of the `end_idx` loop.
+#### Optimization #1:
+**Keep track of a running sum instead of running `get_sum` in each iteration of the inner `end_idx` for loop**
+
+In the brute solution, a lot of repetitive calculations are done in the inner `end_idx` for loop with the `get_sum` function. Instead of recalculating the sum from elements `start_idx` to `end_idx` in every iteration of the `end_idx` loop, we can store a `subarray_sum` variable to save calculations from previous iterations and simply add to it in each iteration of the `end_idx` loop.
-```python=
+```python
def min_sub_array_length(nums, sum):
min_length = float("inf")
for start_idx in range(len(nums)):
@@ -62,10 +64,12 @@ def min_sub_array_length(nums, sum):
This optimization reduces the time complexity from O(n³) to O(n²) with the addition of a variable to store the accumulating sum.
-#### Optimization #2: Reduce number of calculations by terminating the inner `end_idx` for loop early
+#### Optimization #2:
+**Reduce number of calculations by terminating the inner `end_idx` for loop early**
+
With the improved solution, we can further reduce the number of iterations in the inner for loop by terminating it early. Once we have a `subarray_sum` that is equal to or greater than the target sum, we can simply move to the next iteration of the outer for loop. This is because the question asks for the minimum length subarray and any further iterations of the inner for loop would only increase the subarray length.
-```python=
+```python
def min_sub_array_length(nums, sum):
min_length = float("inf")
for start_idx in range(len(nums)):
@@ -78,26 +82,26 @@ def min_sub_array_length(nums, sum):
return min_length if min_length != float("inf") else 0
```
-This is a minor time complexity improvement and this solution will still have a worst case runtime of O(N2). The improvement is nice, but to reduce the runtime from O(N2) to O(N), we would need to somehow eliminate the inner for loop.
+This is a minor time complexity improvement and this solution will still have a worst case runtime of O(n²). The improvement is nice, but to reduce the runtime from O(n²) to O(n), we would need to somehow eliminate the inner for loop.
## Solution #2: Two pointer approach
-**Approach**
-The optimal, two pointer approach to this problem utilizing the observations we made in the previous section. The main idea of this approach is that we grow and shrink an interval as we loop through the list while keeping a running sum that we update as we alter the interval.
+### Approach
-There will be two pointers, one to track the start of the interval and the other to track the end. They will both start at the beginning of the list and will dynamically move to the right until we hit the end of the list.
+The optimal two pointer approach to this problem utilizes the observations we made in the previous section. The main idea of this approach is that we grow and shrink an interval as we loop through the list, while keeping a running sum that we update as we alter the interval.
-First, we grow the interval to the right until it exceeds the minimum sum. Once we find that interval, we move the start pointer right as much as we can to shrink the interval until it sums up to a number that is smaller than the sum.
+There will be two pointers, one to track the start of the interval and the other to track the end. They will both start at the beginning of the list and move dynamically to the right until they hit the end of the list.
+
+First, we grow the interval to the right until it exceeds the minimum sum. Once we find that interval, we move the start pointer right as much as we can to shrink the interval until it sums to a number that is smaller than the target sum.
Then, we move the end pointer once again to try to hit the sum with new intervals. If growing the interval by moving the end pointer leads to an interval that sums up to at least the target sum, we repeat the process of shrinking the interval by moving the start pointer before moving the end pointer any further.
-As we utilize these two pointers to determine which intervals to evaluate, we have a variable to keep track of the current sum of the interval as we go along to avoid recalculating it every time one of the pointers moves to the right and another variable to store the length of the shortest interval that sums up to >= the target sum.
+As we utilize these two pointers to determine which intervals to evaluate, we have a variable to keep track of the current sum of the interval as we go along to avoid recalculating it every time one of the pointers moves to the right, and another variable to store the length of the shortest interval that sums up to >= the target sum.
This push and pull of the end and start pointer will continue until we finish looping through the list.
-
-**Implementation**
-```python=
+### Implementation
+```python
def min_sub_array_length(nums, sum):
start_idx = 0
min_length, subarray_sum = float('inf'), 0
@@ -113,26 +117,24 @@ def min_sub_array_length(nums, sum):
return min_length
```
-**Time and space complexity**
-
-The time complexity of this solution would be **O(N)** because each element is at most visited twice. In the worst case scenario, all elements will be visited once by the start pointer and another time by the end pointer.
+### Time and space complexity
+The time complexity of this solution is **O(n)** because each element is visited at most twice. In the worst case scenario, all elements will be visited once by the start pointer and another time by the end pointer.
The space complexity would be **O(1)** because the solution doesn't create new data structures.
-**Walkthrough**
-
+### Walkthrough
Take the example of `min_sub_array_length([2,3,1,2,4,3], 7)`. The left pointer starts at 0 and the right doesn't exist yet.
-As we start looping through the list, our first interval is [2]. We won't fulfill the while loop condition until the list reaches [2, 3, 1, 2] whose sum, 8 is >= 7. We then set the `min_length` to 4.
+As we start looping through the list, our first interval is [2]. We won't fulfill the while loop condition until the interval reaches [2, 3, 1, 2], whose sum, 8, is >= 7. We then set the `min_length` to 4.
-Now, we shrink the interval to [3, 1, 2] by increasing `start_idx` by 1. This new interval sums up to less than the target sum, 7 so we need to grow the interval. In the next iteration, we grow the interval to [3, 1, 2, 4], which has a sum of 10 and once again, we satisfy the while loop condition.
+Now, we shrink the interval to [3, 1, 2] by increasing `start_idx` by 1. This new interval sums up to less than the target sum, 7, so we need to grow the interval. In the next iteration, we grow the interval to [3, 1, 2, 4], which has a sum of 10 and once again, we satisfy the while loop condition.
-We then shrink the interval to [1, 2, 4]. This is the shortest interval we've come across that sums up to at least the target sum, so we update the `min_length` to 3.
+We then shrink the interval to [1, 2, 4]. This is the shortest interval we've come across that sums up to at least the target sum, so we update the `min_length` to 3.
We now move the `end_idx` pointer until it hits the end of the list, with interval [2, 4, 3]. We then shrink the interval to [4, 3], which sums up to 7, the target sum. This is the shortest interval we've come across that sums up to at least the target sum, so we update the `min_length` to 2. This is the final result that is returned.
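Putting it all together, the walkthrough above corresponds to a complete two pointer solution along these lines. As a sketch, the parameter is named `target` rather than `sum` to avoid shadowing the built-in, and the `else 0` fallback for when no subarray reaches the target is an assumption mirroring the brute-force version:

```python
def min_sub_array_length(nums, target):
    """Length of the shortest contiguous subarray whose sum is >= target."""
    start_idx = 0
    min_length = float("inf")
    subarray_sum = 0
    for end_idx in range(len(nums)):
        # Grow the interval to the right by one element.
        subarray_sum += nums[end_idx]
        # Shrink the interval from the left while it still meets the target.
        while subarray_sum >= target:
            min_length = min(min_length, end_idx - start_idx + 1)
            subarray_sum -= nums[start_idx]
            start_idx += 1
    return min_length if min_length != float("inf") else 0
```

Calling `min_sub_array_length([2, 3, 1, 2, 4, 3], 7)` returns 2, the length of [4, 3], exactly as traced in the walkthrough.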
## Takeaways
-This optimization can often be applied to improve solutions that involve the use of multiple for loops, as shown in the example above. If you have an approach that utilizes multiple for loops, analyze the actions within those for loops to determine if repetitive calculations can be removed through strategic movements of multiple pointers.
+This optimization can often be applied to improve solutions that involve the use of multiple for loops, as demonstrated in the example above. If you have an approach that utilizes multiple for loops, analyze the actions performed in those for loops to determine if repetitive calculations can be removed through strategic movements of multiple pointers.
-**Note**: Though this walkthrough demonstrated applying the two pointer approach to an arrays problem, this approach is commonly utilized to solve string problems as well.
+**Note:** Though this walkthrough demonstrated applying the two pointer approach to an arrays problem, this approach is commonly utilized to solve string problems as well.