## Visualizing the Convex Optimization Process in Python
# Visualizing the process of **Convex Optimization** typically involves showing how an algorithm (such as gradient descent) iterates toward the minimum of a convex function. Here's a general way to visualize this process in Python using `matplotlib` and `numpy`.
# ### Steps:
# 1. **Generate a convex function**: For simplicity, use a quadratic function (e.g., \( f(x) = x^2 \)).
# 2. **Apply an optimization algorithm**: Use gradient descent as the optimizer (its update rule is written out just after this list).
# 3. **Visualize the process**: Plot the function and show how the optimization algorithm converges to the minimum.
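# For reference, the update rule being visualized is plain gradient descent with a fixed step size \( \eta \) (`learning_rate` in the code below): \( x_{k+1} = x_k - \eta \, \nabla f(x_k) \), which for \( f(x) = x^2 \) reduces to \( x_{k+1} = x_k - 2\eta x_k \).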
# Here's how you can implement it:
# ### Python Code:
# ```python
import numpy as np
import matplotlib.pyplot as plt

# Convex function (quadratic function)
def f(x):
    return x ** 2

# Derivative of the convex function (gradient)
def grad_f(x):
    return 2 * x

# Gradient descent algorithm
def gradient_descent(starting_point, learning_rate, num_iterations):
    x = starting_point
    trajectory = [x]
    for _ in range(num_iterations):
        x = x - learning_rate * grad_f(x)
        trajectory.append(x)
    return np.array(trajectory)

# Visualization of the optimization process
def visualize_optimization(trajectory):
    # Define x values and their corresponding function values
    x_vals = np.linspace(-3, 3, 400)
    y_vals = f(x_vals)

    plt.figure(figsize=(8, 6))
    plt.plot(x_vals, y_vals, label=r'$f(x) = x^2$', color='blue')

    # Plot the trajectory of the optimization algorithm
    for i, x in enumerate(trajectory):
        plt.plot(x, f(x), 'ro')  # Mark the point
        plt.text(x, f(x), f'Iter {i}', fontsize=10)
        if i > 0:
            plt.arrow(trajectory[i - 1], f(trajectory[i - 1]),
                      trajectory[i] - trajectory[i - 1],
                      f(trajectory[i]) - f(trajectory[i - 1]),
                      head_width=0.1, head_length=0.1, fc='green', ec='green')

    plt.xlabel('x')
    plt.ylabel('f(x)')
    plt.title('Convex Optimization using Gradient Descent')
    plt.legend()
    plt.grid(True)
    plt.show()

# Parameters for gradient descent
starting_point = 2.5
learning_rate = 0.1
num_iterations = 10

# Get the trajectory of the optimization process
trajectory = gradient_descent(starting_point, learning_rate, num_iterations)

# Visualize the optimization process
visualize_optimization(trajectory)
# ```
# ### Explanation:
# 1. **Convex function**: \( f(x) = x^2 \), which is a simple convex function.
# 2. **Gradient descent**: The algorithm runs for a fixed number of iterations, each time updating \( x \) using the gradient (which is \( 2x \)); a closed-form check of this recurrence appears after the list.
# 3. **Visualization**: The function is plotted, and each iteration of the optimization is marked. The movement of the points toward the minimum is visualized with arrows.
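# As a quick check on that recurrence: for \( f(x) = x^2 \) the update has a closed form, \( x_{k+1} = x_k - \eta \cdot 2x_k = (1 - 2\eta)\,x_k \), so \( x_k = (1 - 2\eta)^k x_0 \). With the parameters above (\( \eta = 0.1 \), \( x_0 = 2.5 \)), each step multiplies \( x \) by 0.8, giving \( x_{10} = 2.5 \times 0.8^{10} \approx 0.27 \), which is where the final plotted point should sit.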
# This code generates a visualization in which you can watch each gradient descent step move toward the minimum at \( x = 0 \). Adjust the learning rate and the number of iterations to observe different behaviors of the optimization process; a short sketch of that sensitivity follows.
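# As an illustration of that sensitivity, here is a minimal sketch reusing the functions defined above (the learning rates are arbitrary example values). For this particular quadratic, each step multiplies \( x \) by \( (1 - 2\eta) \), so the iterates converge only when \( 0 < \eta < 1 \).
# ```python
# Compare final iterates for a few example learning rates.
for lr in (0.05, 0.1, 0.45, 1.1):
    traj = gradient_descent(starting_point=2.5, learning_rate=lr, num_iterations=10)
    print(f"learning_rate={lr}: final x = {traj[-1]:.6f}")
# ```
# With \( \eta = 0.45 \) the step factor is 0.1 and convergence is very fast; with \( \eta = 1.1 \) the factor is \( -1.2 \), so the iterates oscillate in sign and grow instead of converging.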