[New feat request] Add PaLM API #27

Open
invisprints opened this issue May 12, 2023 · 6 comments

@invisprints
Contributor

I recently obtained access to the PaLM API for testing, but I found that there are hardly any client applications or products for PaLM on the market. I would therefore like to convert the PaLM API into OpenAI's format so it can be used with various third-party clients. I have already implemented this for non-printer (non-streaming) mode, but I have been struggling with printer (streaming) mode. Can you help me resolve this? If needed, I can provide API access for debugging.
Here is my code, mostly generated by GPT-4 as I'm not familiar with JavaScript. It currently works fine in non-printer mode.

// The deployment name you chose when you deployed the model.
const deployName = 'chat-bison-001';

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  if (request.method === 'OPTIONS') {
    return handleOPTIONS(request)
  }

  const url = new URL(request.url);
  let path;
  if (url.pathname === '/v1/chat/completions') {
    path = 'generateMessage';
  } else if (url.pathname === '/v1/completions') {
    path = 'generateText';
  } else {
    return new Response('404 Not Found', { status: 404 });
  }

  let body;
  if (request.method === 'POST') {
    body = await request.json();
  }

  const authKey = request.headers.get('Authorization');
  if (!authKey) {
    return new Response("Not allowed", {
      status: 403
    });
  }

  // Remove 'Bearer ' from the start of authKey
  const apiKey = authKey.replace('Bearer ', '');

  const fetchAPI = `https://generativelanguage.googleapis.com/v1beta2/models/${deployName}:${path}?key=${apiKey}`;

  // Transform request body from OpenAI to PaLM format
  const transformedBody = {
    prompt: {
      messages: body?.messages?.map(msg => ({
        author: msg.role === 'user' ? '0' : '1',
        content: msg.content,
      })),
    },
  };

  const payload = {
    method: request.method,
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(transformedBody),
  };

  const response = await fetch(fetchAPI, payload);
  const palmData = await response.json();

  // Transform response from PaLM to OpenAI format
  const transformedResponse = transformResponse(palmData);

  if (body?.stream !== true) {
    return new Response(JSON.stringify(transformedResponse), {
      headers: { 'Content-Type': 'application/json' },
    });
  } else {
    // TODO: add stream output; fall back to a regular JSON response for now
    // so the handler always returns a Response
    return new Response(JSON.stringify(transformedResponse), {
      headers: { 'Content-Type': 'application/json' },
    });
  }
}

// Function to transform the response
function transformResponse(palmData) {
  return {
    id: 'chatcmpl-' + Math.random().toString(36).substring(2), // Generate a random id
    object: 'chat.completion',
    created: Math.floor(Date.now() / 1000), // Current Unix timestamp
    model: 'gpt-3.5-turbo', // Static model name
    usage: {
      prompt_tokens: palmData.messages.length, // This is a placeholder. Replace with actual token count if available
      completion_tokens: palmData.candidates.length, // This is a placeholder. Replace with actual token count if available
      total_tokens: palmData.messages.length + palmData.candidates.length, // This is a placeholder. Replace with actual token count if available
    },
    choices: palmData.candidates.map((candidate, index) => ({
      message: {
        role: 'assistant',
        content: candidate.content,
      },
      finish_reason: 'stop', // Static finish reason
      index: index,
    })),
  };
}

function handleOPTIONS(request) {
  return new Response(null, {
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': '*',
      'Access-Control-Allow-Headers': '*',
    },
  });
}
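
Since the PaLM v1beta2 API appears to return the whole reply at once rather than streaming it, one way to fill in the `// add stream output` branch is to fake the "printer" effect by re-emitting the complete answer as OpenAI-style server-sent-event chunks. This is only a sketch; `sseFromPalm` and its `chunkSize` parameter are hypothetical names, not part of any API, and it assumes the PaLM response shape used elsewhere in this script (`candidates[0].content`).

```javascript
// Hypothetical helper: turn a complete PaLM response into an OpenAI-style
// SSE body ("data: {chunk}\n\n" lines ending with "data: [DONE]\n\n").
function sseFromPalm(palmData, chunkSize = 8) {
  const id = 'chatcmpl-' + Math.random().toString(36).substring(2);
  const created = Math.floor(Date.now() / 1000);
  const text = palmData.candidates?.[0]?.content ?? '';
  const lines = [];

  // Slice the full answer into small deltas to imitate streaming.
  for (let i = 0; i < text.length; i += chunkSize) {
    const chunk = {
      id,
      object: 'chat.completion.chunk',
      created,
      model: 'gpt-3.5-turbo',
      choices: [{
        delta: { content: text.slice(i, i + chunkSize) },
        index: 0,
        finish_reason: null,
      }],
    };
    lines.push('data: ' + JSON.stringify(chunk) + '\n\n');
  }

  // Final chunk carries the finish_reason, then the [DONE] sentinel.
  const done = {
    id,
    object: 'chat.completion.chunk',
    created,
    model: 'gpt-3.5-turbo',
    choices: [{ delta: {}, index: 0, finish_reason: 'stop' }],
  };
  lines.push('data: ' + JSON.stringify(done) + '\n\n');
  lines.push('data: [DONE]\n\n');
  return lines.join('');
}
```

In the `else` branch of `handleRequest` this could then be returned as `new Response(sseFromPalm(palmData), { headers: { 'Content-Type': 'text/event-stream' } })`. Note this does not reduce latency, since the full PaLM response is fetched first; it only makes clients that expect SSE render progressively.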
@haibbo
Owner

haibbo commented May 12, 2023

Great Job!

The PaLM API probably doesn't support stream mode, right? I didn't see it in the documentation, and when I asked questions in Bard I didn't see any printer effect. So I wouldn't recommend implementing printer mode in this case, since you already get the whole response at once.

I don't have API access and am not familiar with it, so I might be wrong.

@haibbo
Owner

haibbo commented May 12, 2023

Could you possibly lend me your key temporarily? I would only use it for proxy development.

My Email: [email protected].

@Caixiaopig
Contributor

I deployed the current version of cf-openai-palm-proxy.js, but it keeps returning a 500 error.

@invisprints
Contributor Author

Using English prompts plus a US IP solves most of these problems.

@invisprints
Contributor Author

@Caixiaopig You can try https://github.com/invisprints/cf-openai-azure-proxy/blob/main/cf-openai-palm-proxy.js — it should return error messages, but I haven't tested it thoroughly.

@Caixiaopig
Contributor

@invisprints Your script is much more convenient now that it surfaces error messages. Cloudflare Workers routes requests based on the caller's IP, so you do indeed need to switch to a US route to reach the workers domain.
