Could you please explain why there are no sRGB/sRGBA equivalents for formats with other bit precisions? For example, only GL_SRGB, GL_SRGB8, GL_SRGB_ALPHA, and GL_SRGB8_ALPHA8 exist.
They don't exist because they're unnecessary.
sRGB can be thought of as a form of compression. By storing colors in the sRGB colorspace, you use the 8 bits per component more efficiently: the nonlinear encoding devotes more code values to the darker tones, where human vision is most sensitive to differences. If you're using 16 bits per component, such compression is unnecessary; you already have plenty of precision.
But the deeper reason has to do with hardware implementation. When a GPU reads data from a texture via a texture fetch operation (say, GLSL's texture function), it needs to convert those texel values into something useful within the shader. RGBA8 stores unsigned normalized integers, so the hardware must map the [0, 255] data to the [0.0, 1.0] floating-point range. That's cheap: a single scale by 1/255 in fixed-function hardware.
Converting sRGB colorspace values to the linear [0.0, 1.0] range is much more involved: it's a piecewise function with an exponentiation in it. But there's an easy way out: a simple lookup table. After all, at 8 bits per component, you only need 256 entries.
But if you had to support 16-bit-per-component sRGB, you'd need 65,536 entries per table. That's a much larger table, and given the point above, it's just not worthwhile.
As for the smaller component formats, there's no point either. Such formats are themselves a form of compression; you would only use them to save storage. But we have better forms of image compression: the various block-compressed formats. They offer better compression ratios (4 bpp or less, versus 32 bpp), better image quality than 16 bpp formats, or both.
And they already support sRGB. So there's no reason to extend sRGB support to these inferior forms of compression.