
Using Fireworks models in the example collab notebook throws an OpenAI API error #1108

Closed
Ajacmac opened this issue Feb 4, 2025 · 5 comments

Comments

@Ajacmac (Contributor) commented Feb 4, 2025

Describe the bug
This could easily be user error, but I'm trying to run the Colab notebook against the Fireworks API rather than the OpenAI API, and I'm getting the following error:

⚠️ Error in reading JSON, attempting to repair JSON
Error using json_repair: the JSON object must be str, bytes or bytearray, not NoneType
---------------------------------------------------------------------------
AuthenticationError                       Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/gpt_researcher/actions/agent_creator.py in choose_agent(query, cfg, parent_query, cost_callback, headers)
     26     try:
---> 27         response = await create_chat_completion(
     28             model=cfg.smart_llm_model,

22 frames
AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: dummy_key. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
/usr/lib/python3.11/re/__init__.py in search(pattern, string, flags)
    174     """Scan through string looking for a match to the pattern, returning
    175     a Match object, or None if no match was found."""
--> 176     return _compile(pattern, flags).search(string)
    177 
    178 def sub(pattern, repl, string, count=0, flags=0):

TypeError: expected string or bytes-like object, got 'NoneType'

To Reproduce
Steps to reproduce the behavior:

  1. Open the Colab link in the README
  2. Add the lines of code for the Fireworks models from https://docs.gptr.dev/docs/gpt-researcher/llms/llms
  3. Modify the fast/smart/strategic lines to use R1 and V3 instead of Mixtral
  4. Add the langchain-fireworks package, as the documentation mentions at https://docs.gptr.dev/docs/gpt-researcher/llms/supported-llms
  5. Add the dummy OpenAI API key, since I saw that in the documentation once or twice (without any OpenAI API key I got a different error saying one was required)
  6. Run the code blocks in the notebook top to bottom, as normal. The third block triggers the error.

Expected behavior
I expected GPT-Researcher to run and use the Fireworks model APIs.

Screenshots

(three screenshots attached in the original issue)

Additional context
There's no desktop/OS information since this runs in Colab.

There are probably instructions I'm missing, but the link to the config page in the docs is currently dead:
https://docs.gptr.dev/gpt-researcher/config

(screenshot attached in the original issue)

@mahdijafaridev commented

I have the same issue. I tried installing via the pip package and by cloning the gpt-researcher repo, but no matter what I get the same error.

@danieldekay (Contributor) commented

Can you move the parameters like SMART_LLM etc. into your environment, like you do with the API keys?
Does that work?
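For reference, a minimal sketch of what that could look like in a Colab cell, assuming the `FAST_LLM`/`SMART_LLM`/`STRATEGIC_LLM` environment variable names and `provider:model` value format described in the GPTR docs; the exact Fireworks model slugs below are assumptions and should be checked against the Fireworks model catalog:

```python
import os

# Hypothetical Colab cell: configure GPT Researcher through environment
# variables instead of in-notebook config lines. Variable names follow
# the GPTR docs; the DeepSeek model slugs are assumptions.
os.environ["FIREWORKS_API_KEY"] = "fw-..."  # your real Fireworks key
os.environ["OPENAI_API_KEY"] = "dummy_key"  # placeholder, per the docs
os.environ["FAST_LLM"] = "fireworks:accounts/fireworks/models/deepseek-v3"
os.environ["SMART_LLM"] = "fireworks:accounts/fireworks/models/deepseek-r1"
os.environ["STRATEGIC_LLM"] = "fireworks:accounts/fireworks/models/deepseek-r1"
```

Setting these before importing/instantiating GPTResearcher matters, since the config is typically read from the environment at startup.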

@assafelovic (Owner) commented

Hey @Ajacmac this is the correct config url: https://docs.gptr.dev/docs/gpt-researcher/gptr/config

Where is this URL from?

@Ajacmac (Contributor, Author) commented Feb 7, 2025

@assafelovic The link is from this page, specifically the "here" at the bottom.

(screenshot attached in the original issue)

I made a PR to fix it.

@danieldekay I'll try that now.

@Ajacmac (Contributor, Author) commented Feb 7, 2025

@danieldekay That fixed the problem. There's a new error, but it's not clear to me whether it's related to this issue.

Immediately after this line:

INFO: [17:12:10] 📚 Getting relevant content based on query: Nvidia financial performance Q4 2024 and 2025 revenue forecasts...

it throws this error:

ERROR:research:Error processing sub-query Should I invest in Nvidia?: Number of rows exceed limit of 256
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/gpt_researcher/skills/researcher.py", line 270, in _process_sub_query
    content = await self.researcher.context_manager.get_similar_content_by_query(sub_query, scraped_data)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/gpt_researcher/skills/context_manager.py", line 26, in get_similar_content_by_query
    return await context_compressor.async_get_context(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/gpt_researcher/context/compression.py", line 71, in async_get_context
    relevant_docs = await asyncio.to_thread(compressed_docs.invoke, query)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/tasks.py", line 349, in __wakeup
    future.result()
  File "/usr/lib/python3.11/asyncio/futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain_core/retrievers.py", line 259, in invoke
    result = self._get_relevant_documents(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain/retrievers/contextual_compression.py", line 48, in _get_relevant_documents
    compressed_docs = self.base_compressor.compress_documents(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain/retrievers/document_compressors/base.py", line 39, in compress_documents
    documents = _transformer.compress_documents(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain/retrievers/document_compressors/embeddings_filter.py", line 73, in compress_documents
    embedded_documents = _get_embeddings_from_stateful_docs(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain_community/document_transformers/embeddings_redundant_filter.py", line 71, in _get_embeddings_from_stateful_docs
    embedded_documents = embeddings.embed_documents(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/langchain_fireworks/embeddings.py", line 103, in embed_documents
    for i in self.client.embeddings.create(input=texts, model=self.model).data
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/openai/resources/embeddings.py", line 125, in create
    return self._post(
           ^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/openai/_base_client.py", line 1283, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/openai/_base_client.py", line 960, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/openai/_base_client.py", line 1064, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Number of rows exceed limit of 256
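The final frame suggests the Fireworks embeddings endpoint rejects requests with more than 256 inputs, while the compressor passes all scraped chunks in one call. A possible workaround sketch: batch the texts before embedding so no single request exceeds the limit. `embed_in_batches` is a hypothetical helper, not part of gpt-researcher, and the stand-in embedder below only illustrates the call shape (a real one would be something like `FireworksEmbeddings().embed_documents`):

```python
def embed_in_batches(embed_fn, texts, batch_size=256):
    """Call embed_fn on successive slices of at most batch_size texts,
    concatenating the resulting vectors in order."""
    vectors = []
    for start in range(0, len(texts), batch_size):
        vectors.extend(embed_fn(texts[start:start + batch_size]))
    return vectors

# Stand-in embedder: one fake 1-dim vector per input text.
fake_embed = lambda batch: [[float(len(t))] for t in batch]

# 600 texts would normally trip the 256-row limit in a single request;
# here they go out as three requests of 256, 256, and 88.
vecs = embed_in_batches(fake_embed, ["a", "bb"] * 300)
```

Until something like this is wired into the compression path (or the library batches internally), the limit will be hit whenever more than 256 chunks are scraped for a sub-query.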

Let me know if you want me to make a new issue.

@Ajacmac Ajacmac closed this as completed Feb 9, 2025
4 participants