
Conversation

@helpimnotdrowning

The new argument --batches <batches: int || 4> parallelizes the splits using p-limit (version 3.1, which is compatible with Node 10). This runs much faster on my machine: with the font "IBM Plex Sans JP" (complete/hinted) at -c 128, the current code takes about 1 minute per split for 157 splits (it took too long for me to keep measuring; estimated 2h40m). With the default batch size of 4, each split takes around 3 minutes, but the whole run finishes in only 44 minutes.
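
As a rough sketch of how the batching could look (assuming the per-split work is wrapped in an async function; runSplit, splits, and processSplits below are illustrative names, not the tool's actual API):

```js
const pLimit = require("p-limit"); // p-limit 3.x is CommonJS and runs on Node 10

// Placeholder for the real per-split work (generating one subset).
async function runSplit(split) {
  /* ... spawn the subsetter for this split ... */
}

async function processSplits(splits, batches = 4) {
  const limit = pLimit(batches); // at most `batches` splits in flight at once
  await Promise.all(splits.map((split) => limit(() => runSplit(split))));
}
```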

@changeset-bot

changeset-bot bot commented Sep 29, 2025

⚠️ No Changeset found

Latest commit: 07fce7a

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

@gemini-code-assist

Summary of Changes

Hello @helpimnotdrowning, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a major performance enhancement by parallelizing the font splitting process. It adds a new --batches argument, allowing users to specify how many font subsets should be generated concurrently. By switching to asynchronous execution and utilizing a concurrency control library, the tool can now process multiple splits in parallel, leading to a substantial reduction in overall processing time, especially for fonts with many splits.

Highlights

  • Parallel Processing: Introduced a new command-line argument --batches <batches> (default: 4) to enable parallel processing of font splits, significantly reducing execution time for large font files.
  • Asynchronous Execution: Migrated from synchronous execFileSync to asynchronous execFile using util.promisify for external process calls, improving responsiveness and allowing for parallel operations (see the sketch after this list).
  • Concurrency Control: Integrated the p-limit library to manage the concurrency of font subset generation, ensuring that a specified number of tasks run in parallel without overwhelming system resources.
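
A minimal sketch of the sync-to-async migration mentioned in the second highlight, assuming a generic external command (the command name, arguments, and runTool function below are placeholders, not the tool's real invocation):

```js
const { execFile } = require("child_process");
const { promisify } = require("util");

const execFileAsync = promisify(execFile);

// Before: execFileSync(cmd, args) blocks until the child process exits.
// After: the promisified call returns a promise, so several child processes
// can run at once and p-limit can cap how many are in flight.
async function runTool(cmd, args) {
  const { stdout, stderr } = await execFileAsync(cmd, args);
  return { stdout, stderr };
}
```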

@gemini-code-assist bot left a comment

Code Review

This pull request introduces parallel processing for font subset generation, which is a great performance improvement. The implementation uses p-limit to control concurrency, which is appropriate for this task. I've made a few suggestions to improve code clarity and maintainability, such as removing unused parameters and commented-out code. I also have a suggestion to improve the progress logging to make it more accurate in a parallel execution context. Overall, this is a solid contribution.

@helpimnotdrowning
Author

The swap of the logging order mentioned above was an opinionated change, based on my personal belief that logs should state what the program is about to do rather than what it has just done (I find this helps with debugging and tracking down the sources of issues).

I made this swap before I even thought about publishing my changes; if you (the maintainer) would like me to revert it, I can, but I think it's best to keep it as it is.

(The AI explanation is also wrong: it says that "several tasks might be logged as 'started' while they are actually waiting in a queue", but limit() waits for the whole inner function to complete, so only the currently running tasks are logged; queued tasks are not logged.)
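
A minimal sketch of why that holds, assuming the "started" log lives inside the function passed to limit() (names and timings below are illustrative):

```js
const pLimit = require("p-limit");

const limit = pLimit(2); // at most 2 tasks run at once

const tasks = [1, 2, 3, 4].map((n) =>
  limit(async () => {
    // p-limit only invokes this callback when a slot is free, so a queued
    // task never logs "started" before it is actually running.
    console.log(`split ${n}: started`);
    await new Promise((resolve) => setTimeout(resolve, 100)); // stand-in for real work
    console.log(`split ${n}: done`);
  })
);

Promise.all(tasks).then(() => console.log("all splits finished"));
```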
