https://github.com/lukeharwood11/openai-proxz
I wanted a simple interface for interacting with OpenAI and OpenAI-compatible APIs and couldn't find one that was MIT-licensed and had the features I needed, so I built one!

As someone coming from Python, I loved how simple the openai-python package was, so this interface is modeled after it.
📙 ProxZ Docs: https://proxz.mle.academy
## Features
- Built-in retry logic
- Environment variable config support for API keys, organization IDs, project IDs, and base URLs (see the example after this list)
- Integration with the most popular OpenAI endpoints, plus a generic `request` method for endpoints not yet covered
- Streamed responses
- Configurable logging
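For example, configuration can come entirely from the environment. `OPENAI_API_KEY` is the variable shown in the usage section below; the other variable names here are assumptions for illustration, so check the docs for the canonical names:

```bash
# Confirmed by the usage section below:
export OPENAI_API_KEY="sk-..."
# The names below are assumptions for illustration; see https://proxz.mle.academy
export OPENAI_ORG_ID="org-..."
export OPENAI_PROJECT_ID="proj_..."
export OPENAI_BASE_URL="https://api.openai.com/v1"
```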
## Installation
To install `proxz`, run:

```bash
zig fetch --save "git+https://github.com/lukeharwood11/openai-proxz"
```
Then add the following to your `build.zig`:

```zig
const proxz = b.dependency("proxz", .{
    .target = target,
    .optimize = optimize,
});
exe.root_module.addImport("proxz", proxz.module("proxz"));
```
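If you're new to Zig's build system, here is a minimal sketch of where those lines fit inside a full `build.zig` (standard build boilerplate; the executable name and source path are illustrative):

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "my-app", // illustrative name
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Wire up the proxz dependency fetched above.
    const proxz = b.dependency("proxz", .{
        .target = target,
        .optimize = optimize,
    });
    exe.root_module.addImport("proxz", proxz.module("proxz"));

    b.installArtifact(exe);
}
```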
## Usage
### Client Configuration
```zig
const proxz = @import("proxz");
const OpenAI = proxz.OpenAI;

// make sure you have an OPENAI_API_KEY environment variable set,
// or pass in an .api_key field to set it explicitly!
var openai = try OpenAI.init(allocator, .{});
defer openai.deinit();
```
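The snippets here assume an `allocator` is already in scope. A complete entry point might look like this (standard `std` boilerplate; `.api_key` is the field mentioned in the comment above, passed explicitly for illustration):

```zig
const std = @import("std");
const proxz = @import("proxz");
const OpenAI = proxz.OpenAI;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Passing the key explicitly instead of relying on OPENAI_API_KEY.
    var openai = try OpenAI.init(allocator, .{
        .api_key = "sk-...", // prefer the environment variable in real code
    });
    defer openai.deinit();
}
```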
### Chat Completions
#### Regular
```zig
const ChatMessage = proxz.ChatMessage;

var response = try openai.chat.completions.create(.{
    .model = "gpt-4o",
    .messages = &[_]ChatMessage{
        .{
            .role = "user",
            .content = "Hello, world!",
        },
    },
});
// This will free all the memory allocated for the response
defer response.deinit();

std.log.debug("{s}", .{response.choices[0].message.content});
```
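Since `ChatMessage` is just a struct with `.role` and `.content`, a multi-turn conversation is simply a longer slice of the same shape (the roles below follow the usual OpenAI conventions):

```zig
var response = try openai.chat.completions.create(.{
    .model = "gpt-4o",
    .messages = &[_]ChatMessage{
        .{ .role = "system", .content = "You are a terse assistant." },
        .{ .role = "user", .content = "What is Zig's allocator pattern?" },
        .{ .role = "assistant", .content = "Callers pass allocators to anything that allocates." },
        .{ .role = "user", .content = "Why is that useful?" },
    },
});
defer response.deinit();

std.log.debug("{s}", .{response.choices[0].message.content});
```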
#### Streamed Response
```zig
var stream = try openai.chat.completions.createStream(.{
    .model = "gpt-4o-mini",
    .messages = &[_]ChatMessage{
        .{
            .role = "user",
            .content = "Write me a poem about lizards. Make it a paragraph or two.",
        },
    },
});
defer stream.deinit();

std.debug.print("\n", .{});
while (try stream.next()) |val| {
    std.debug.print("{s}", .{val.choices[0].delta.content});
}
std.debug.print("\n", .{});
```
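If you want the full text once streaming finishes (for logging, or to feed back as conversation history), the loop above can accumulate each delta into a buffer as well as printing it. A sketch, assuming `stream` and `allocator` are in scope and using the `.delta.content` field shown above:

```zig
// Managed ArrayList API (Zig <= 0.14); newer versions use the unmanaged form.
var full_text = std.ArrayList(u8).init(allocator);
defer full_text.deinit();

while (try stream.next()) |val| {
    const chunk = val.choices[0].delta.content;
    try full_text.appendSlice(chunk); // keep a copy of each delta
    std.debug.print("{s}", .{chunk}); // and still print as it streams
}
std.log.debug("final length: {d} bytes", .{full_text.items.len});
```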
### Embeddings
```zig
const inputs = [_][]const u8{ "Hello", "Foo", "Bar" };
const response = try openai.embeddings.create(.{
    .model = "text-embedding-3-small",
    .input = &inputs,
});
// Don't forget to free resources!
defer response.deinit();

std.log.debug("Model: {s}\nNumber of Embeddings: {d}\nDimensions of Embeddings: {d}", .{
    response.model,
    response.data.len,
    response.data[0].embedding.len,
});
```
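Embeddings are typically compared with cosine similarity. Assuming the `embedding` slices hold `f32` values (check the actual element type in proxz), a small file-scope helper plus usage continuing the example above might look like:

```zig
// Cosine similarity between two equal-length embedding vectors.
fn cosineSimilarity(a: []const f32, b: []const f32) f32 {
    var dot: f32 = 0;
    var norm_a: f32 = 0;
    var norm_b: f32 = 0;
    for (a, b) |x, y| {
        dot += x * y;
        norm_a += x * x;
        norm_b += y * y;
    }
    return dot / (@sqrt(norm_a) * @sqrt(norm_b));
}

// e.g. how close is "Hello" to "Foo"?
const sim = cosineSimilarity(
    response.data[0].embedding,
    response.data[1].embedding,
);
std.log.debug("similarity: {d:.4}", .{sim});
```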
### Models
#### Get model details
```zig
var response = try openai.models.retrieve("gpt-4o");
defer response.deinit();

std.log.debug("Model is owned by '{s}'", .{response.owned_by});
```
#### List all models
```zig
var response = try openai.models.list();
defer response.deinit();

std.log.debug("The first model you have available is '{s}'", .{response.data[0].id});
```
## Configuring Logging
By default, all logs are enabled for your entire application. To configure your application and set the log level for `proxz`, include the following in your `main.zig`:
```zig
pub const std_options = std.Options{
    .log_level = .debug, // this sets your app-level log config
    .log_scope_levels = &[_]std.log.ScopeLevel{
        .{
            .scope = .proxz,
            .level = .info, // set to .debug, .info, .warn, or .err
        },
    },
};
```
All logs in `proxz` use the scope `.proxz`, so if you don't want to see debug/info logs of the requests being sent, set `.level = .err`. This will only display logs when an error occurs that `proxz` can't recover from.
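For context, scoped logging is a standard Zig facility: a library creates a scoped logger, and the `std_options` above filter it by scope. A minimal illustration of the mechanism (not proxz's actual internals):

```zig
const std = @import("std");

// A library-side scoped logger; a scope of .proxz matches the filter above.
const log = std.log.scoped(.proxz);

pub fn demo() void {
    log.debug("request sent", .{}); // filtered out when .level = .err
    log.err("unrecoverable error", .{}); // still shown when .level = .err
}
```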
## Contributions
Contributions are welcome and encouraged! Submit an issue for any bugs/feature requests and open a PR if you tackled one of them!