[WebGPU] maxAnisotropy > 16 is clamped, rather than illegal
Created attachment 455062 [details] Patch
Comment on attachment 455062 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=455062&action=review

> Source/WebGPU/WebGPU/Sampler.mm:195
> + samplerDescriptor.maxAnisotropy = std::min(descriptor.maxAnisotropy, static_cast<uint16_t>(16));

Sometimes fewer tokens for the human to parse, and less ambiguous (not sure it compiles without errors for us, maybe?):
std::min<uint16_t>(descriptor.maxAnisotropy, 16);
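(For readers following along, a minimal standalone sketch, not the WebKit sources, assuming the field is a uint16_t as on the quoted Sampler.mm line. The plain two-argument call is the one that fails to compile; whether the explicit-template spelling trips any of WebKit's warning flags is the open question above.)

#include <algorithm>
#include <cstdint>

int main()
{
    uint16_t requested = 64;
    // std::min(requested, 16) does not compile: 16 is an int, so template
    // argument deduction sees two different types and fails. Naming the type
    // once resolves it:
    uint16_t clamped = std::min<uint16_t>(requested, 16);
    // The patch instead casts the literal to get the same effect:
    //     uint16_t clamped = std::min(requested, static_cast<uint16_t>(16));
    return clamped == 16 ? 0 : 1;
}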
Comment on attachment 455062 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=455062&action=review

> Source/WebGPU/ChangeLog:12
> + Covered by api/operation/sampling/anisotropy.spec.ts
Committed r291593 (248687@trunk): <https://commits.webkit.org/248687@trunk>
<rdar://problem/90605401>
Comment on attachment 455062 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=455062&action=review

>> Source/WebGPU/WebGPU/Sampler.mm:195
>> + samplerDescriptor.maxAnisotropy = std::min(descriptor.maxAnisotropy, static_cast<uint16_t>(16));
>
> Sometimes fewer tokens for the human to parse, and less ambiguous (not sure it compiles without errors for us, maybe?):
> std::min<uint16_t>(descriptor.maxAnisotropy, 16);

I don’t absolutely love this idiom, because when I read std::min<uint16_t>(descriptor.maxAnisotropy, 16) I think "is maxAnisotropy wider than 16 bits? Because if it is, this thing will chop the high bits." Because of that I would write the less terse:

constexpr uint16_t maxMaxAnisotropy = 16;
samplerDescriptor.maxAnisotropy = std::min(descriptor.maxAnisotropy, maxMaxAnisotropy);
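(A standalone sketch of the spelling preferred above, with a hypothetical descriptor type standing in for the real WebGPU one; this is not the committed diff.)

#include <algorithm>
#include <cstdint>

// Hypothetical stand-in for the real WebGPU sampler descriptor type.
struct SamplerDescriptorLike {
    uint16_t maxAnisotropy { 1 };
};

uint16_t clampedMaxAnisotropy(const SamplerDescriptorLike& descriptor)
{
    // Naming the limit keeps both std::min arguments uint16_t, so there is no
    // cast, no explicit template argument, and no question of high bits being
    // chopped; values above 16 are clamped rather than rejected.
    constexpr uint16_t maxMaxAnisotropy = 16;
    return std::min(descriptor.maxAnisotropy, maxMaxAnisotropy);
}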