Zig NEWS

Discussion on: Re: Makin' wavs with Zig

weber-martin

I don't understand the benefit of leaving the types off the consts. Fine, now they are comptime; you drop one cast (SAMPLE_RATE, since a comptime_int can be treated as comptime_float or comptime_int), but then you have to re-add a cast in another place (to make sure SAMPLE_RATE is actually treated as a float). It is not obvious to me why the second place requires the cast while the first place lets you drop it. Both places have the comptime_int as divisor with a dividend that is already a float, don't they?
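A minimal sketch of the dual nature being discussed (SAMPLE_RATE is the constant from the article; the surrounding declarations are hypothetical): an untyped const is a comptime_int and coerces to either an integer or a float type on demand, which is the cast you get to drop:

```zig
const SAMPLE_RATE = 44100; // untyped: comptime_int

// The same constant coerces to both worlds, no cast needed:
const as_int: u32 = SAMPLE_RATE;   // 44100
const as_float: f32 = SAMPLE_RATE; // 44100.0
```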

Also, I like to state the expected bit widths of the data types involved in binary I/O. If you have a u32 and write only a single byte because that is all the given value needs (e.g., writing a u32 127 as a u7 127, i.e. one byte), nothing but your own attention will catch it. Similarly with the repeated explicit type in writeIntLittle: in some places you convert, in others you don't, and it is not obvious from the language where a conversion is happening and where it isn't.
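A hypothetical sketch of that concern, using the writeIntLittle API from the std library of that era: the comptime type parameter, not the value, decides how many bytes reach the output, so a too-narrow type silently shrinks the field:

```zig
const std = @import("std");

pub fn main() !void {
    var buf: [8]u8 = undefined;
    var fbs = std.io.fixedBufferStream(&buf);
    const w = fbs.writer();

    // The declared type, not the value, decides the byte count:
    try w.writeIntLittle(u32, 127); // 4 bytes: 7F 00 00 00
    try w.writeIntLittle(u8, 127);  // 1 byte:  7F
}
```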

Finally, I disagree about the usage of @truncate: here it is used to select a single byte from a multi-byte constant, a perfectly valid use-case that should happily ignore the non-zero bits in the rest of the bit field. What is your actual problem with @truncate in this use-case?
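For illustration, a hypothetical version of that use-case (the RIFF magic is just an example value, and the two-argument @truncate signature matches the Zig version of that time): the builtin deliberately keeps only the low byte and drops the rest:

```zig
const magic: u32 = 0x46464952; // "RIFF" read as a little-endian u32
// 0x52 ('R'); the upper bits are intentionally discarded.
const low_byte = @truncate(u8, magic);
```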

TIA for enlightenment.

Felix "xq" Queißner

@truncate will silently remove upper bits, which means that arithmetic like try file.writeIntLittle(u16, (CHANNELS * (BIT_DEPTH / BYTE_SIZE))); will just compile even when it tries to put 65536 into a u16 (the program will silently emit 0). Without the truncate and without explicit types, the computation would still yield the too-large value, but with this error:

demo.zig:6:31: error: type 'u16' cannot represent integer value '102400'
    const foo: u16 = CHANNELS * (BIT_DEPTH / BYTE_SIZE);
                     ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
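For concreteness, a hypothetical sketch of the silent wrap that @truncate permits (the value 102400 is taken from the error above; the two-argument @truncate signature is the one from that Zig version):

```zig
const x: u32 = 102400;
// Keeps only the low 16 bits: 102400 - 65536 = 36864. No error, no warning.
const y = @truncate(u16, x);
```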

Which is unarguably better than having the program compile and emit invalid files for weeks before someone notices. Also, computing values with comptime_int is much easier than using fixed types. For example, with (3 * a / 4) you can compute a precise 75% of an integer a, but if you declare a: u16 and a is 40000, it will overflow:

demo2.zig:3:14: error: overflow of integer type 'u16' with value '120000'
const b = (3 * a / 4);
           ~~^~~

whereas with a comptime_int value it just works.
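A minimal sketch of that, reusing the values from above: the oversized intermediate exists only at comptime, and only the final result has to fit the target type:

```zig
const a = 40000; // comptime_int, no fixed width
// The intermediate 3 * a = 120000 exceeds u16, but that is fine at comptime;
// the final result 30000 fits, so this compiles.
const b: u16 = 3 * a / 4;
```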

weber-martin

Thanks for your explanations!

The thing I still don't get is the following: comptime_int lets you do computations whose results or intermediate results are out of range of the ultimate target type (in our case, u16). You will only notice at the point where you finally leave the comptime_int realm and try to fit the value into a u16; you will not notice otherwise.

What is the harm in trying to provoke the error as early as possible instead of deferring it to the latest possible point (when you finally explicitly coerce to u16)?

Isn't it advisable to trigger an error as early as possible, and hence annotate your consts with the target type? You've given an example of scaling down by 3/4; what if I need or want to store the intermediate result as well and drive further (comptime) logic with it? In the end, I suppose, things wouldn't compile; I just don't see the benefit of not stating explicitly the domain of values I intend a const (or comptime var, for that matter) to take.
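To make that scenario concrete, a hypothetical sketch (names and values invented for illustration): even when the intermediate is stored in a named const and drives further comptime logic, the range check is deferred until the value is coerced into a fixed-width type:

```zig
const a = 40000;       // comptime_int
const tripled = 3 * a; // 120000: out of u16 range, but no error yet
// Only here, at the coercion to u16, is the range checked; 30000 fits.
const b: u16 = tripled / 4;
```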

You seem to "sell" this with "I can go temporarily out of domain with comptime_int", whereas I wouldn't "buy" it precisely because of that "feature".

Again, I think in the end it doesn't matter either way, because sooner or later (without @truncate) you'll run into an error; but you say it is more idiomatic to allow temporary values to be out of domain, which is the thing that confuses me.

Felix "xq" Queißner

Just to clarify my thinking: imho, constants should be untyped, as their implicit type is really an integer type that allows only a single value (the value of the constant) and can thus be stored in 0 bits. I guess that when Zig gets the @Int(min, max) type, a constant x will just be of type @Int(x, x), which can be stored in 0 bits but whose value can exceed any reasonably storable number. This allows assigning a constant x = 7 both to a @Int(0, 7) and to a @Int(7, 14).
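@Int(min, max) is speculative, but today's comptime_int already behaves like the 0-bit single-value type described: an untyped constant coerces to any type that can represent its exact value. A minimal sketch:

```zig
const x = 7; // comptime_int; conceptually @Int(7, 7)
const a: u3 = x;  // ok: 7 is representable in u3 (range 0..7)
const b: i64 = x; // ok: also representable here
```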