Zig NEWS

Luke Harwood
openai-proxz - An intuitive OpenAI library for zig!

https://github.com/lukeharwood11/openai-proxz

I wanted a simple interface for interacting with OpenAI and compatible APIs and couldn't find one that was MIT licensed and had the features I needed, so I built one!

As someone coming from Python, I loved how simple the openai-python package was, so this library is modeled after that interface.

📙 ProxZ Docs: https://proxz.mle.academy

Features

  • Built-in retry logic
  • Environment variable config support for API keys, org IDs, project IDs, and base URLs
  • Integration with the most popular OpenAI endpoints with a generic request method for missing endpoints

Installation

To install proxz, run

 zig fetch --save "git+https://github.com/lukeharwood11/openai-proxz"

Then add the following to your build.zig:

const proxz = b.dependency("proxz", .{
    .target = target,
    .optimize = optimize,
});

exe.root_module.addImport("proxz", proxz.module("proxz"));

Usage

Client Configuration

const proxz = @import("proxz");
const OpenAI = proxz.OpenAI;
// make sure you have an OPENAI_API_KEY environment variable set,
// or pass in a .api_key field to explicitly set!
var openai = try OpenAI.init(allocator, .{});
defer openai.deinit();

Since OpenAI was one of the first major LLM providers, many others model their APIs on its contract. That means you can use other providers by setting the OPENAI_BASE_URL environment variable or adjusting the config:

var openai = try OpenAI.init(allocator, .{
    .api_key = "my-groq-api-key",
    .base_url = "https://api.groq.com/openai/v1",
    .max_retries = 5,
});
defer openai.deinit();
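For instance, the environment-variable route might look like this in a shell. This is a sketch: OPENAI_API_KEY and OPENAI_BASE_URL are the variable names mentioned in this post, and the values are placeholders mirroring the Groq config above.

```shell
# Placeholder values -- substitute your real key.
export OPENAI_API_KEY="my-groq-api-key"
export OPENAI_BASE_URL="https://api.groq.com/openai/v1"
```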

Chat Completions

const std = @import("std");
const ChatMessage = proxz.ChatMessage;

var response = try openai.chat.completions.create(.{
    .model = "gpt-4o",
    .messages = &[_]ChatMessage{
        .{
            .role = "user",
            .content = "Hello, world!",
        },
    },
});
// This will free all the memory allocated for the response
defer response.deinit();
const completions = response.data;
std.log.debug("{s}", .{completions.choices[0].message.content});
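Building on the example above, a multi-message conversation is just a longer ChatMessage slice. Here is a sketch; note the "system" role string is an assumption based on the standard OpenAI chat contract, not something I've verified against proxz:

```zig
var response = try openai.chat.completions.create(.{
    .model = "gpt-4o",
    .messages = &[_]ChatMessage{
        // A system message steers the assistant's behavior ("system" role
        // is assumed here from the OpenAI API convention).
        .{ .role = "system", .content = "You are a terse assistant." },
        .{ .role = "user", .content = "Hello, world!" },
    },
});
defer response.deinit();
```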

Embeddings

const inputs = [_][]const u8{ "Hello", "Foo", "Bar" };
const embeddings_response = try openai.embeddings.create(.{
    .model = "text-embedding-3-small",
    .input = &inputs,
});
// Don't forget to free resources!
defer embeddings_response.deinit();
const embeddings = embeddings_response.data;
std.log.debug("Model: {s}\nNumber of Embeddings: {d}\nDimensions of Embeddings: {d}", .{
    embeddings.model,
    embeddings.data.len,
    embeddings.data[0].embedding.len,
});
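A common next step with embeddings is comparing them. Here is a small self-contained cosine-similarity helper in plain Zig; it assumes each embedding is a slice of f32, which is my guess at the element type and not confirmed by the proxz docs:

```zig
const std = @import("std");

/// Cosine similarity between two equal-length vectors:
/// dot(a, b) / (|a| * |b|).
fn cosineSimilarity(a: []const f32, b: []const f32) f32 {
    std.debug.assert(a.len == b.len);
    var dot: f32 = 0;
    var norm_a: f32 = 0;
    var norm_b: f32 = 0;
    for (a, b) |x, y| {
        dot += x * y;
        norm_a += x * x;
        norm_b += y * y;
    }
    return dot / (@sqrt(norm_a) * @sqrt(norm_b));
}
```

You could then compare two results with something like cosineSimilarity(embeddings.data[0].embedding, embeddings.data[1].embedding).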

Contributions

Contributions are welcome and encouraged! Submit an issue for any bugs/feature requests and open a PR if you tackled one of them!
