Commit 45945d4

Merge branch 'main' into main

2 parents f6e1076 + e994c7c
File tree: 9 files changed, +888 −605 lines

blog/authors.yml

Lines changed: 12 additions & 15 deletions

```diff
@@ -3,20 +3,18 @@ ajay-dhangar:
   title: Founder of CodeHarborHub
   url: https://ajay-dhangar.github.io/
   image_url: https://avatars.githubusercontent.com/u/99037494?v=4
+  page: true # Turns the feature on
+  description: >
+    A passionate developer who loves to code and build new things. I am a Full Stack Developer and a Cyber Security, ML & AI Enthusiast. I am also a Technical Content Writer and a Speaker. I love to share my knowledge with the community. I am the Founder of CodeHarborHub. I am also a Technical Content Writer at GeeksforGeeks. I am a Girl Script Summer of Code 2024 Project Manager (PA).
 
-ajay-dhangar_2024:
-  name: Ajay Dhangar
-  title: Software Engineer at OptimumAI
-  url: https://www.optimumai.in/community
-  image_url: https://avatars.githubusercontent.com/u/99037494?v=4
-
-ajay-dhangar_2020:
-  name: Ajay Dhangar
-  title: B.Tech (CSE) Student
-  url: https://github.com/ajay-dhangar
-  image_url: https://avatars.githubusercontent.com/u/99037494?v=4
-
+  socials:
+    x: CodesWithAjay
+    linkedin: ajay-dhangar
+    github: ajay-dhangar
+    stackoverflow: 18530900
+    newsletter: https://ajay-dhangar.github.io
 hitesh-gahanolia:
   name: Hitesh Gahanolia
   tile: Final Year Student At MNNIT ALLAHABAD
@@ -27,7 +25,7 @@ dharshibalasubramaniyam:
   name: Dharshi B.
   tile: Software Engineering undergraduate
   url: https://github.com/dharshibalasubramaniyam
-  image_url: https://avatars.githubusercontent.com/u/139672976?s=400&v=4
+  image_url: https://avatars.githubusercontent.com/u/139672976?s=400&v=4
 
 abhijith-m-s:
   name: Abhijith M S
@@ -82,4 +80,3 @@ akshitha-chiluka:
   title: Software Engineering Undergraduate
   url: https://github.com/AKSHITHA-CHILUKA
   image_url: https://avatars.githubusercontent.com/u/120377576?v=4
-
```
Lines changed: 116 additions & 0 deletions

# Add Adam in Deep Learning Optimizers

This section explains and implements the Adam optimization algorithm used in deep learning. Adam (Adaptive Moment Estimation) is a popular optimizer that combines the benefits of two other widely used methods: AdaGrad and RMSProp.

## Table of Contents

- [Introduction](#introduction)
- [Mathematical Explanation](#mathematical-explanation)
- [Adam in Gradient Descent](#adam-in-gradient-descent)
- [Update Rule](#update-rule)
- [Implementation in Keras](#implementation-in-keras)
- [Results](#results)
- [Advantages of Adam](#advantages-of-adam)
- [Limitations of Adam](#limitations-of-adam)

## Introduction

Adam is an optimization algorithm that computes adaptive learning rates for each parameter. It combines the advantages of AdaGrad and RMSProp by using estimates of the first and second moments of the gradients. Adam is widely used in deep learning because it is efficient and effective across a broad range of problems.

## Mathematical Explanation

### Adam in Gradient Descent

Adam refines stochastic gradient descent by maintaining an individual adaptive learning rate for each parameter, derived from the first and second moments of that parameter's gradients.

### Update Rule

The update rule for Adam is as follows:

1. Compute the first moment estimate (mean of gradients):

$$
m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
$$

2. Compute the second moment estimate (uncentered variance of gradients):

$$
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
$$

3. Correct the bias in the first moment estimate:

$$
\hat{m}_t = \frac{m_t}{1 - \beta_1^t}
$$

4. Correct the bias in the second moment estimate:

$$
\hat{v}_t = \frac{v_t}{1 - \beta_2^t}
$$

5. Update the parameters:

$$
\theta_t = \theta_{t-1} - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon} \hat{m}_t
$$

where:

- $\theta$ are the model parameters
- $\eta$ is the learning rate
- $\beta_1$ and $\beta_2$ are the exponential decay rates for the moment estimates
- $\epsilon$ is a small constant that prevents division by zero
- $g_t$ is the gradient at time step $t$
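The five update steps above can be written directly in NumPy. This is a minimal sketch for illustration only: the function name `adam_step` and the toy quadratic objective are choices made here, not part of any library.

```python
import numpy as np

def adam_step(theta, m, v, g, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Apply one Adam update; t is the 1-based step counter."""
    m = beta1 * m + (1 - beta1) * g            # first moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2       # second moment estimate
    m_hat = m / (1 - beta1 ** t)               # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)               # bias-corrected second moment
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2 (gradient 2 * theta) from a starting point of 5.0
theta = np.array([5.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, m, v, 2 * theta, t, eta=0.1)
print(theta)  # ends close to the minimum at 0
```

Note that the bias correction matters most early on: at $t = 1$, $m_1 = (1 - \beta_1) g_1$ is tiny, and dividing by $1 - \beta_1^t$ rescales it to a usable step.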
## Implementation in Keras

A simple implementation of the Adam optimizer using Keras:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# Generate dummy data
X_train = np.random.rand(1000, 20)
y_train = np.random.randint(2, size=(1000, 1))

# Define a model
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dense(1, activation='sigmoid'))

# Compile the model with the Adam optimizer
optimizer = Adam(learning_rate=0.001)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(X_train, y_train, epochs=50, batch_size=32)
```

In this example:

- We generate some dummy data for training.
- We define a simple neural network model with one hidden layer.
- We compile the model using the Adam optimizer with a learning rate of 0.001.
- We train the model for 50 epochs with a batch size of 32.

## Results

The results of the training process, including the loss and accuracy, are displayed after each epoch. You can adjust the learning rate and other hyperparameters to see how they affect the training process.
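To see the effect of one such hyperparameter without a full Keras run, here is a small self-contained NumPy sketch (the helper `adam_minimize` is defined here purely for illustration) comparing two learning rates on the toy problem $f(x) = x^2$:

```python
import numpy as np

def adam_minimize(grad, x0, eta, steps, beta1=0.9, beta2=0.999, eps=1e-8):
    """Run Adam for `steps` iterations on a scalar problem; return the final x."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        x -= eta * m_hat / (np.sqrt(v_hat) + eps)
    return x

grad = lambda x: 2 * x  # gradient of f(x) = x^2, minimum at x = 0
slow = adam_minimize(grad, x0=5.0, eta=0.001, steps=500)
fast = adam_minimize(grad, x0=5.0, eta=0.1, steps=500)
print(slow, fast)  # the larger learning rate gets much closer to the minimum here
```

On this toy problem the larger rate wins, but on real losses too large a rate can diverge, which is why tuning is worthwhile.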
## Advantages of Adam

1. **Adaptive Learning Rates**: Adam computes an adaptive learning rate for each parameter, which helps with faster convergence.
2. **Momentum**: Adam incorporates momentum through its first moment estimate, smoothing the optimization path and helping it move past shallow local minima.
3. **Bias Correction**: Adam corrects the bias of its moment estimates, improving convergence in the early stages of training.
4. **Robustness**: Adam works well in practice for a wide range of problems, including those with noisy gradients or sparse data.

## Limitations of Adam

1. **Hyperparameter Sensitivity**: The performance of Adam is sensitive to the choice of hyperparameters ($\beta_1$, $\beta_2$, $\eta$), which may require careful tuning.
2. **Memory Usage**: Adam stores the first and second moments for every parameter, which can be significant for large models.
3. **Generalization**: Models trained with Adam sometimes generalize worse than those trained with simpler optimizers such as SGD.
Lines changed: 213 additions & 0 deletions

---
id: faithful-numbers
title: Faithful Numbers
sidebar_label: Faithful-Numbers
tags:
  - Array
  - Data Structure
description: "This tutorial covers the solution to the Faithful Numbers problem from the GeeksforGeeks website."
---

## Problem Description

A number is called faithful if it can be written as a sum of distinct powers of 7, e.g., `2457 = 7^1 + 7^2 + 7^4`. Ordering all faithful numbers gives the sequence `1 = 7^0`, `7 = 7^1`, `8 = 7^0 + 7^1`, `49 = 7^2`, `50 = 7^0 + 7^2`, and so on.
Given a value `N`, find the N-th faithful number.

## Examples

**Example 1:**

```
Input:
N = 3
Output:
8
Explanation:
8 is the 3rd Faithful number.
```

**Example 2:**

```
Input:
N = 7
Output:
57
Explanation:
57 is the 7th Faithful number.
```

## Your Task

You don't need to read input or print anything. Your task is to complete the function `nthFaithfulNum()`, which takes an integer `N` as input and returns the answer.

Expected Time Complexity: $O(\log n)$

Expected Auxiliary Space: $O(\log n)$

## Constraints

* `1 ≤ n ≤ 10^5`

## Problem Explanation

Each faithful number selects a distinct subset of the powers `7^0, 7^1, 7^2, ...` and sums them. Listing those sums in increasing order yields the sequence above, and the task is to return its N-th term.
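A useful characterization (an observation added here, not part of the original statement): a number is faithful exactly when its base-7 representation uses only the digits 0 and 1, since each power of 7 may appear at most once. A short brute-force check recovers the opening of the sequence:

```python
def is_faithful(x: int) -> bool:
    """True if x is a sum of distinct powers of 7, i.e. its base-7 digits are all 0 or 1."""
    while x > 0:
        if x % 7 > 1:        # a base-7 digit of 2..6 would reuse a power of 7
            return False
        x //= 7
    return True

first_seven = [x for x in range(1, 60) if is_faithful(x)]
print(first_seven)  # [1, 7, 8, 49, 50, 56, 57]
```

This matches the examples: the 3rd entry is 8 and the 7th is 57.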
## Code Implementation

<Tabs>
<TabItem value="Python" label="Python" default>
<SolutionAuthor name="@Ishitamukherjee2004"/>

```python
def get_nth_faithful_number(n):
    # Generate faithful numbers in increasing order until at least n exist.
    faithful_numbers = []
    power = 0
    while len(faithful_numbers) < n:
        num = 7 ** power
        faithful_numbers.append(num)
        # Adding the new power of 7 to each earlier faithful number extends the
        # sequence, still in increasing order. range() is evaluated once, so the
        # loop visits only the elements that existed before this block.
        for i in range(len(faithful_numbers) - 1):
            faithful_numbers.append(num + faithful_numbers[i])
        power += 1
    return faithful_numbers[n - 1]

n = int(input("Enter the value of N: "))
print("The {}th faithful number is: {}".format(n, get_nth_faithful_number(n)))
```

</TabItem>
<TabItem value="C++" label="C++">
<SolutionAuthor name="@Ishitamukherjee2004"/>

```cpp
#include <iostream>
#include <vector>

long long getNthFaithfulNumber(int n) {
    // long long: powers of 7 overflow a 32-bit int well before n reaches 10^5.
    std::vector<long long> faithfulNumbers;
    long long num = 1;  // 7^0
    while ((int)faithfulNumbers.size() < n) {
        faithfulNumbers.push_back(num);
        // Capture the bound first: push_back inside the loop grows the vector.
        int count = (int)faithfulNumbers.size() - 1;
        for (int i = 0; i < count; i++) {
            faithfulNumbers.push_back(num + faithfulNumbers[i]);
        }
        num *= 7;  // next power of 7
    }
    return faithfulNumbers[n - 1];
}

int main() {
    int n;
    std::cout << "Enter the value of N: ";
    std::cin >> n;
    std::cout << "The " << n << "th faithful number is: " << getNthFaithfulNumber(n) << std::endl;
    return 0;
}
```

</TabItem>

<TabItem value="Javascript" label="Javascript">
<SolutionAuthor name="@Ishitamukherjee2004"/>

```javascript
function getNthFaithfulNumber(n) {
    const faithfulNumbers = [];
    let power = 0;
    while (faithfulNumbers.length < n) {
        const num = Math.pow(7, power);
        faithfulNumbers.push(num);
        // Capture the bound first: pushing inside the loop grows the array.
        const count = faithfulNumbers.length - 1;
        for (let i = 0; i < count; i++) {
            faithfulNumbers.push(num + faithfulNumbers[i]);
        }
        power++;
    }
    return faithfulNumbers[n - 1];
}

const n = parseInt(prompt("Enter the value of N:"));
alert("The " + n + "th faithful number is: " + getNthFaithfulNumber(n));
```

</TabItem>

<TabItem value="Typescript" label="Typescript">
<SolutionAuthor name="@Ishitamukherjee2004"/>

```typescript
function getNthFaithfulNumber(n: number): number {
    const faithfulNumbers: number[] = [];
    let power = 0;
    while (faithfulNumbers.length < n) {
        const num = Math.pow(7, power);
        faithfulNumbers.push(num);
        // Capture the bound first: pushing inside the loop grows the array.
        const count = faithfulNumbers.length - 1;
        for (let i = 0; i < count; i++) {
            faithfulNumbers.push(num + faithfulNumbers[i]);
        }
        power++;
    }
    return faithfulNumbers[n - 1];
}

const n: number = parseInt(prompt("Enter the value of N:") ?? "0");
alert("The " + n + "th faithful number is: " + getNthFaithfulNumber(n));
```

</TabItem>

<TabItem value="Java" label="Java">
<SolutionAuthor name="@Ishitamukherjee2004"/>

```java
import java.util.*;

public class Main {
    public static long getNthFaithfulNumber(int n) {
        // long: powers of 7 overflow int well before n reaches 10^5.
        List<Long> faithfulNumbers = new ArrayList<>();
        long num = 1; // 7^0
        while (faithfulNumbers.size() < n) {
            faithfulNumbers.add(num);
            // Capture the bound first: adding inside the loop grows the list.
            int count = faithfulNumbers.size() - 1;
            for (int i = 0; i < count; i++) {
                faithfulNumbers.add(num + faithfulNumbers.get(i));
            }
            num *= 7; // next power of 7
        }
        return faithfulNumbers.get(n - 1);
    }

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter the value of N: ");
        int n = scanner.nextInt();
        System.out.println("The " + n + "th faithful number is: " + getNthFaithfulNumber(n));
    }
}
```

</TabItem>
</Tabs>
## Solution Logic

This solution generates faithful numbers on the fly and stores them in a list. It starts with the smallest faithful number, `1` (which is `7^0`), and then produces larger faithful numbers by adding each new power of 7 to every previously generated number. Because each new power of 7 exceeds the sum of all smaller powers, the list is built in increasing order, so the N-th faithful number is simply the element at index `N - 1`.
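The expected $O(\log n)$ bound can be met without generating the whole list, via an observation not used in the solutions above: enumerating faithful numbers in order is the same as counting in binary, with each binary digit of N re-read as a base-7 digit. A sketch in Python (the function name `nth_faithful` is chosen here for illustration):

```python
def nth_faithful(n: int) -> int:
    """The i-th bit of n decides whether 7**i contributes to the n-th faithful number."""
    result, power = 0, 1
    while n > 0:
        if n & 1:
            result += power  # include this power of 7
        n >>= 1
        power *= 7
    return result

print(nth_faithful(3), nth_faithful(7))  # 8 57
```

This runs in $O(\log n)$ time and $O(1)$ extra space, matching (and beating) the expected bounds.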
## Time Complexity

* Roughly $2n$ entries are generated before the $n$-th faithful number is reached, and each append is $O(1)$, so the running time is $O(n)$. Note that this does not meet the expected $O(\log n)$ bound, which requires a digit-based construction rather than explicit generation.

## Space Complexity

* The list of generated faithful numbers dominates, so the auxiliary space is $O(n)$, versus the expected $O(\log n)$.
