
Update publication with internship2 #64

Open · wants to merge 4 commits into base: master
11 changes: 10 additions & 1 deletion src/components/PublicationFeatures/index.js
@@ -97,6 +97,14 @@ export function MiscItem({link, customName="Misc"}) {
);
}

export function LeaderboardItem({link, customName="Leaderboard"}) {
return (
<p className={styles.leaderboard}>
<a href={link} target="_blank" rel="noopener noreferrer">{customName}</a>
</p>
);
}



export default {
@@ -106,5 +114,6 @@ export default {
PaperDescription: PaperDescription,
GithubItem: GithubItem,
DemoItem: DemoItem,
MiscItem: MiscItem
MiscItem: MiscItem,
LeaderboardItem: LeaderboardItem,
}
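
For reference, here is a minimal sketch of how the newly exported LeaderboardItem is consumed from an MDX page — it mirrors the usage this PR adds to publications.mdx and assumes the same `features` namespace import used there:

```jsx
import * as features from '@site/src/components/PublicationFeatures';

// LeaderboardItem renders a paragraph styled with styles.leaderboard that wraps
// an external link (target="_blank", rel="noopener noreferrer"); the label
// defaults to "Leaderboard" unless customName is passed.
<features.LeaderboardItem link="https://huggingface.co/spaces/maum-ai/KOFFVQA-Leaderboard" />
```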
4 changes: 2 additions & 2 deletions src/pages/careers.mdx
@@ -131,13 +131,13 @@ The Brain team has a staggered working-hours system.

Brain team members receive **the device of their choice when they join, from a GPU-equipped desktop to a MacBook**, and we **provide a monitor and monitor arm by default**, so we look after our members' neck health too! 💪

The Brain team **operates more than 10 V100s, 30 A100s, and 64 H100s on-premise for research use only**, and we maintain a research and development environment in which each member can use at least two on-premise GPUs. (as of June 2024)
The Brain team **operates more than 10 V100s, 30 A100s, and 96 H100s on-premise for research use only**, and we maintain a research and development environment in which each member can use at least two on-premise GPUs. (as of December 2024)

<img className={styles.figCenter} src={figBrainRoom} alt="brain-room" />

<br/>

To give Brain team AI Scientists the best possible research environment, we encourage them to **split their working hours evenly** between **Strategic Research**, which follows the company's business direction, and **General Research**, self-directed research on topics of personal interest.
To give Brain team AI Scientists the best possible research environment, we encourage them to **split their working hours evenly** between **Strategic Research**, which follows the company's business direction, and ~~**General Research**, self-directed research on topics of personal interest~~ (currently suspended due to company circumstances).

If you would like to **attend conferences such as NeurIPS, ICLR, CVPR, ECCV, Interspeech, and ACL, or submit papers to them**, we actively support it! 💸 (See the **[Publications](/publications)** tab for the papers accepted so far!)
<br/>
4 changes: 2 additions & 2 deletions src/pages/internship-season1.mdx
@@ -1,6 +1,6 @@
---
title: maum.ai Brain team recruiting
description: Apply to the maum.ai Brain team!
title: maum.ai Brain team hands-on internship recruiting
description: Apply to the maum.ai Brain team hands-on internship!
image: img/maumai_Symbol.png
---

8 changes: 4 additions & 4 deletions src/pages/internship-season2.mdx
@@ -1,6 +1,6 @@
---
title: maum.ai Brain team recruiting
description: Apply to the maum.ai Brain team!
title: maum.ai Brain team hands-on internship recruiting
description: Apply to the maum.ai Brain team hands-on internship!
image: img/maumai_Symbol.png
---

@@ -52,7 +52,7 @@ ChatGPT, Copilot… it's hard to live without AI these days, isn't it?
MaumAI's ML Engineer team is a group of people researching how to serve these LLMs faster and more efficiently.
Among those topics, on-device AI is the hottest one!
We look for the best ways to serve LLMs on the latest smartphones and edge devices, and study techniques that squeeze more performance out of fewer resources.
:위를_가리키는_손_모양: This post was actually written with some help from ChatGPT,
☝️ This post was actually written with some help from ChatGPT,
but making it happen in the real world is up to you!

##### Here's what we'll work on together
@@ -181,7 +181,7 @@ MaumAI's ML Engineer team researches serving these LLMs faster and more efficiently
- Final acceptance announcement: scheduled for December 26
- Selection process: document screening → technical interview (video call) → final acceptance announcement

Internship period: about 2 months during summer vacation
Internship period: about 2 months during winter vacation
(December 30 to early March, about 2 months; may vary depending on the applicant's academic schedule)

The selection process is subject to change, and an executive interview may be added depending on the result of the technical interview.<br/>
21 changes: 9 additions & 12 deletions src/pages/internship.mdx
@@ -1,6 +1,6 @@
---
title: maum.ai Brain team recruiting
description: Apply to the maum.ai Brain team!
title: maum.ai Brain team hands-on internship recruiting
description: Apply to the maum.ai Brain team hands-on internship!
image: img/maumai_Symbol.png
---

@@ -19,19 +19,16 @@ import figGPU from './image/h100-gpu.png';

<br/>

We are currently recruiting for the 2nd Brain team hands-on internship for the 2024 winter vacation. Expanding beyond the NLP topic we ran previously, two new topics, Audio and MLE, have been added this year. Please see the link below for details.

<div className={styles.buttons}>
<Link
className="button button--primary button--lg"
color="red"
to="/internship-season2">
Apply for the 2nd Brain team hands-on internship✍
</Link>
</div>
The 2nd Brain team hands-on internship for the 2024 winter vacation is currently in progress. <br/>

## Past application pages

> 2nd Brain team hands-on internship, 2024 winter vacation<br/>
> Conducted around the NLP, Audio, and MLE topics.<br/>
> Competition ratio **95:5**, 5 final acceptances in total<br/>
> [Learn more](/internship-season2)

> 1st Brain team hands-on internship, 2024 summer vacation<br/>
> Conducted around the NLP (LLM) topic.<br/>
> Competition ratio **33:3**, 1 final acceptance per track, 3 in total<br/>
> [Learn more](/internship-season1)
6 changes: 6 additions & 0 deletions src/pages/open-source.mdx
@@ -16,6 +16,12 @@ import * as features from '@site/src/components/OpenSourceFeatures';

<section id="activities" className={styles.category}>
<ul className={styles.repositories}>
<li>
{/* <features.StarItem userName="maum-ai" repoName="KOFFVQA" /> */}
<features.StarItem userName="maum-ai" repoName="KOFFVQA" />
<features.GithubLinkItem userName="maum-ai" repoName="KOFFVQA" repoNickname="KOFFVQA" />
<features.PaperLinkItem paperLink="https://arxiv.org/abs/2503.23730" title="KOFFVQA: An Objectively Evaluated Free-form VQA Benchmark for Large Vision-Language Models in the Korean Language" />
</li>
<li>
{/* <features.StarItem userName="worv-ai" repoName="CANVAS" /> */}
<features.StarItem userName="worv-ai" repoName="CANVAS" />
19 changes: 17 additions & 2 deletions src/pages/publications.mdx
@@ -11,18 +11,33 @@ import * as features from '@site/src/components/PublicationFeatures';
<!-- ![maum.ai Logo](assets/maumai_BI.png) -->
## Publications

### 2024
### 2025
<section id="activities" className={styles.category}>
<ul className={styles.publications}>
<li>
<features.ConferenceItem conference="ICRA Under Review"/>
<features.ConferenceItem conference="CVPR Workshop"/>
<features.PaperTitle paperLink="https://arxiv.org/abs/2503.23730" title="KOFFVQA: An Objectively Evaluated Free-form VQA Benchmark for Large Vision-Language Models in the Korean Language"/>
<features.AuthorItem authors={["Yoonshik Kim", "Jaeyoon Jung"]} numFirstAuthor={4} isBrainTeam={[true, true]}/>
<features.PaperDescription preview="The recent emergence of Large Vision-Language Models(VLMs) has resulted in a variety of different benchmarks for evaluating such models. "
description="Despite this, we observe that most existing evaluation methods suffer from the fact that they either require the model to choose from pre-determined responses, sacrificing open-endedness, or evaluate responses using a judge model, resulting in subjective and unreliable evaluation. In addition, we observe a lack of benchmarks for VLMs in the Korean language, which are necessary as a separate metric from more common English language benchmarks, as the performance of generative language models can differ significantly based on the language being used. Therefore, we present KOFFVQA, a general-purpose free-form visual question answering benchmark in the Korean language for the evaluation of VLMs. Our benchmark consists of 275 carefully crafted questions each paired with an image and grading criteria covering 10 different aspects of VLM performance. The grading criteria eliminate the problem of unreliability by allowing the judge model to grade each response based on a pre-determined set of rules. By defining the evaluation criteria in an objective manner, even a small open-source model can be used to evaluate models on our benchmark reliably. In addition to evaluating a large number of existing VLMs on our benchmark, we also experimentally verify that our method of using pre-existing grading criteria for evaluation is much more reliable than existing methods. Our evaluation code is available at https://github.com/maum-ai/KOFFVQA."/>
<features.GithubItem link="https://github.com/maum-ai/KOFFVQA" />
<features.LeaderboardItem link="https://huggingface.co/spaces/maum-ai/KOFFVQA-Leaderboard" />
</li>
<li>
<features.ConferenceItem conference="ICRA"/>
<features.PaperTitle paperLink="https://arxiv.org/abs/2410.01273" title="CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot Interaction"/>
<features.AuthorItem authors={["Suhwan Choi", "Yongjun Cho", "Minchan Kim", "Jaeyoon Jung", "Myunchul Joe", "Yubeen Park", "Minseo Kim", "Sungwoong Kim", "Sungjae Lee", "Hwiseong Park", "Jiwan Chung", "Youngjae Yu"]} numFirstAuthor={4} isBrainTeam={[true, true, true, true, true, false, false, false, false, false, false, false]}/>
<features.PaperDescription preview="Real-life robot navigation involves more than just reaching a destination; it requires optimizing movements while addressing scenario-specific goals. "
description="An intuitive way for humans to express these goals is through abstract cues like verbal commands or rough sketches. Such human guidance may lack details or be noisy. Nonetheless, we expect robots to navigate as intended. For robots to interpret and execute these abstract instructions in line with human expectations, they must share a common understanding of basic navigation concepts with humans. To this end, we introduce CANVAS, a novel framework that combines visual and linguistic instructions for commonsense-aware navigation. Its success is driven by imitation learning, enabling the robot to learn from human navigation behavior. We present COMMAND, a comprehensive dataset with human-annotated navigation results, spanning over 48 hours and 219 km, designed to train commonsense-aware navigation systems in simulated environments. Our experiments show that CANVAS outperforms the strong rule-based system ROS NavStack across all environments, demonstrating superior performance with noisy instructions. Notably, in the orchard environment, where ROS NavStack records a 0% total success rate, CANVAS achieves a total success rate of 67%. CANVAS also closely aligns with human demonstrations and commonsense constraints, even in unseen environments. Furthermore, real-world deployment of CANVAS showcases impressive Sim2Real transfer with a total success rate of 69%, highlighting the potential of learning from human demonstrations in simulated environments for real-world applications."/>
<features.GithubItem link="https://github.com/worv-ai/canvas" />
<features.DemoItem link="https://worv-ai.github.io/canvas/" />
</li>
</ul>
</section>

### 2024
<section id="activities" className={styles.category}>
<ul className={styles.publications}>
<li>
<features.ConferenceItem conference="NeurIPS Workshop OWA (Oral)"/>
<features.PaperTitle paperLink="https://openreview.net/forum?id=U6wyOnPt1U" title="Integrating Visual and Linguistic Instructions for Context-Aware Navigation Agents"/>