FEATURE REQUEST: "shader" type and ability to "call" shaders


Master Zap <zap.an...@...>
 

Lots of renderers and shading systems allow this. 

I know OSL wants to be all pure and fancy and the shader graph to be a pure DAG - and I can see the theoretical purity in that - but artists keep asking me for these things.
And I don't think it's impossible, really. It can be done in a well-defined way.

What am I talking about? Treating attached shader inputs effectively as a "subroutine": being able to "call" it with overridden globals, maybe even multiple times.

This is super useful in so many instances.

Take for example my Randomized Bitmap shader... which places random things in random places all over an object... right now it has to be limited to pure textures (images) because there would be no way for the Randomized Bitmap shader to modify the UV lookup of an attached input.

I would suggest it to work something like this:

#1: A new data type called "shader"
#2: A syntax to "call" this shader with overridden globals or parameters.

So like this:

shader ShittyBlur (
    int BlurSamples = 8,
    float BlurWidth = 0.1,
    shader ShaderToBlur,
    output color Blurry = 0.0
)
{
    for (int i = 0; i < BlurSamples; i++)
    {
        vector rnd = (noise("hash", P, i) - 0.5) * BlurWidth;
        Blurry += ShaderToBlur.call("u", u + rnd[0], "v", v + rnd[1]);
    }
    Blurry /= BlurSamples;
}

The above shader would multi-sample the attached input shader, calling it again and again with slightly different u and v coordinates and averaging the results.
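To make the intended semantics concrete outside OSL, here's a rough Python sketch of the same idea, with the attached shader modeled as a plain callable (all names here are mine, purely illustrative):

```python
import random

def shitty_blur(shader_to_blur, u, v, blur_samples=8, blur_width=0.1):
    """Average a pluggable 'shader' (here just a callable) over jittered (u, v) lookups."""
    total = 0.0
    for i in range(blur_samples):
        # Stand-in for OSL's hash noise: a deterministic per-sample jitter.
        rng = random.Random(hash((round(u, 6), round(v, 6), i)))
        du = (rng.random() - 0.5) * blur_width
        dv = (rng.random() - 0.5) * blur_width
        total += shader_to_blur(u + du, v + dv)
    return total / blur_samples

# Any function of (u, v) can stand in for the attached shader network:
checker = lambda u, v: float((int(u * 8) + int(v * 8)) % 2)
blurred = shitty_blur(checker, 0.5, 0.5)
```

The key point is that the downstream shader, not the renderer, decides where and how often the attached input gets evaluated.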

This is just one of 1000 examples of how this could work nicely.

Arnold uses this for many things, like the Bump2d shader's way of calling the input shader multiple times to compute the gradient, or the "Toon" shader's way of calling the shaders connected to its "tonemap" inputs to apply gradients to shaded results.
Legacy 3ds Max shaders use this a lot to do things like camera mapping, or to modify the UV lookup of connected shaders in other ways, just like in this example.
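The Bump2d case boils down to central differences over a pluggable height input; a hedged Python sketch of that pattern (names are illustrative, not Arnold's actual API):

```python
def bump_gradient(height_shader, u, v, eps=1e-3):
    """Finite-difference gradient of a pluggable height input: the attached
    'shader' is evaluated four extra times at offset (u, v) coordinates."""
    dhdu = (height_shader(u + eps, v) - height_shader(u - eps, v)) / (2 * eps)
    dhdv = (height_shader(u, v + eps) - height_shader(u, v - eps)) / (2 * eps)
    return dhdu, dhdv

# A linear 'height shader' has a constant gradient:
grad = bump_gradient(lambda u, v: 2.0 * u + 3.0 * v, 0.25, 0.25)
```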

I don't think this is rocket surgery to implement on the OSL side of things either, and it would be a spiffy addition to OSL 2.0, IMHO.


Thoughts, everyone?


/Z


Master Zap <zap.an...@...>
 

Another example came up today when I was making a fake raymarching laser beam: any atmospheric shading in my fake volume has to be "in" the shader. It would be so much nicer if I could just connect any shade tree to my "density" input and have the raymarching shader sample that at whatever points it wants...
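A minimal Python sketch of what that would look like, with the "density" input as a pluggable callable (the names, step counts, and absorption model are all made up for illustration):

```python
def raymarch(density, origin, direction, steps=64, step_len=0.1, absorption=1.0):
    """Fake volumetric integration: march along the ray and sample a pluggable
    density field at each step -- the call the 'evaluate' feature would enable."""
    transmittance = 1.0
    accumulated = 0.0
    for i in range(steps):
        t = (i + 0.5) * step_len
        p = tuple(o + d * t for o, d in zip(origin, direction))
        sigma = density(p)  # any 'shade tree' could sit behind this call
        accumulated += transmittance * sigma * step_len
        transmittance *= max(0.0, 1.0 - absorption * sigma * step_len)
    return accumulated

# A unit sphere of constant density stands in for the connected density network:
sphere = lambda p: 1.0 if sum(c * c for c in p) < 1.0 else 0.0
beam = raymarch(sphere, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0))
```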

(Attaching a glowing cube for entertainment) :)

/Z




Changsoo Eun <chang...@...>
 

EXACTLY!

This is what I was trying to say in my previous attempt...
Seeing this request coming from Zap makes me very very very very happy.




Master Zap <zap.an...@...>
 

Actually, we don't need a new type, just a new function, evaluate(), which takes a regular input value but evaluates it in a context with overridden globals/inputs.

shader ShittyBlur (
    int BlurSamples = 8,
    float BlurWidth = 0.1,
    color ColorToBlur = 0.0,
    output color Blurry = 0.0
)
{
    for (int i = 0; i < BlurSamples; i++)
    {
        vector rnd = (noise("hash", P, i) - 0.5) * BlurWidth;
        Blurry += evaluate(ColorToBlur, "u", u + rnd[0], "v", v + rnd[1]);
    }
    Blurry /= BlurSamples;
}


Olivier Paquet <olivie...@...>
 

On Sunday, July 1, 2018 at 02:07:20 UTC-4, Master Zap wrote:
I know OSL wants to be all pure and fancy and the shader graph to be a pure DAG - and I can see the theoretical purity in that - but artists keep asking me for these things.
And I don't think it's impossible, really. It can be done in a well-defined way.

What am I talking about? About Treating attached shader inputs effectively as a "subroutine", to be able to "call" it with overridden globals, maybe even multiple times.

It's not just "theoretical purity". This kind of pattern:

- mess with global variables
- run piece of code which uses global variables
- mess again with global variables
- run piece of code which uses global variables

goes against the last 30-40 years of software engineering wisdom. I get that it looks like a quick and painless short-term solution to a problem. But it feels as wrong as the guy walking off from the group to explore a dark corridor alone in the Nth Alien sequel. Just because we're a bit behind the times in the CG world does not mean we should keep doing things the wrong way.

Besides, u and v are surface parametric coordinates, not texture coordinates. So in our renderer at least it would be useless to override them as far as moving textures goes. Worse, it would make dPdu, dPdv incoherent. And it would screw up attribute lookups which need u,v to evaluate the attribute, etc.

I get that there's a need for some nicer mechanism to do things like texture projections but I don't feel like this should be it.

Olivier


Master Zap <zap.an...@...>
 

On u / v ... you took a little too much out of a simplified example....

Besides, the model isn't "mess with global variables" in the uncontrolled way you are insinuating. 
Rather, it is a well-defined, cleanly scoped, temporary modification of state (parameters or "globals") while re-running code with a well defined set of parameters.
(Besides, it's not my fault OSL is working with a concept of "magical globals" - one of the parts of the language I dislike the most) :)

Think about it as using shader connections as pluggable subroutines, if you will. 

This is extremely useful in an insane number of cases. The "ShittyBlur" example was just the first that popped into my head and an easy way to describe what I mean conceptually. Don't take it too literally...

/Z





Zap Andersson <z...@...>
 

Let me take a more real-world example where this is useful.

In 3ds Max, there is a shader called Randomized Bitmap, which scatters a set of images across an object randomly.

Here's a (bad) video of it in action:


It takes up to 10 images and randomly splatters them across a surface, alpha-blending them on top of each other, looking the images up with different rotations, scaling, whatnot.

(In practice it is using a set of grid cells, using cellnoise for each of the cells to compute randomized modifications for the bitmaps within that cell, including actually moving out of the cell into neighbouring cells.)
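The per-cell randomization amounts to hashing the integer cell coordinates; here's a rough Python stand-in for the cellnoise part (the hash constants are arbitrary, not 3ds Max's actual implementation):

```python
def cellnoise(ix, iy, channel=0):
    """Deterministic pseudo-random float in [0, 1) per integer cell -- a crude
    stand-in for OSL's cellnoise (hash constants are arbitrary)."""
    h = (ix * 73856093) ^ (iy * 19349663) ^ (channel * 83492791)
    return (h & 0x7FFFFFFF) / float(0x80000000)

def cell_transform(u, v):
    """Every lookup inside one grid cell sees the same randomized offset and
    rotation, so the bitmap placed in that cell is modified coherently."""
    ix, iy = int(u), int(v)
    offset = (cellnoise(ix, iy, 0) - 0.5, cellnoise(ix, iy, 1) - 0.5)
    rotation = cellnoise(ix, iy, 2) * 360.0
    return offset, rotation
```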

There is also a way to drive the probability of whether a bitmap should show up or not in its cell, driven by another cellnoise from that cell.

Here are the problems with this:


PROBLEM #1: This only lets me scatter bitmaps (images) around. That's very limiting. 

What if I wanted to scatter a procedurally generated thing around, or even scatter an image with a color correction operation on it? That's impossible with OSL. 

The only thing the user can supply for the shader to "look up" is an image file. What I want is the ability to supply anything procedurally driven, and have that "looked up" by the shader equally.

Actually creating the same visual effects with pure DAG shaders might be possible with some insane spaghetti setup, but you would have to break out texture-coordinate-generating shaders separately and wire them all into some kind of switcher that picks which subshader to actually drive. And since N lookups may happen at the same sample... I'm not even sure it is possible, or that any mere human would be able to wrap their head around the insane spaghetti that would be needed to do something so "simple".


PROBLEM #2: Subtle problem of probability

So each bitmap can show up with a certain probability; basically I compute a float cellnoise for each bitmap and compare its output to a set probability threshold. If below, paint it; if above, don't.

This works great until the user attempts to texture the probability too. He puts some undulating noise function into the "probability" input and wonders why his bitmaps are getting cut off.

Well, they are getting cut off because the bitmap just stops being painted at the point where the noise undulates above the probability threshold - totally disregarding that you were halfway through the bitmap and have now cut off half of it.

What really needs to happen is that any randomization of the probability of bitmap N is looked up with a noise function driven by the cell of bitmap N, so that the same "randomized" probability value is computed for the entirety of the bitmap.
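In other words, the probability input should be sampled once per cell, at a coordinate derived from the cell, not per shading sample. A tiny Python sketch of the difference (illustrative only):

```python
def painted_per_sample(prob_texture, threshold, u, v):
    """Broken: the texture is sampled at the shading point, so a bitmap gets
    cut off wherever the texture crosses the threshold mid-bitmap."""
    return prob_texture(u, v) < threshold

def painted_per_cell(prob_texture, threshold, u, v):
    """Fixed: sample the user's probability texture once, at the cell center,
    so the whole bitmap in a cell is either painted or not."""
    ix, iy = int(u), int(v)
    return prob_texture(ix + 0.5, iy + 0.5) < threshold

ramp = lambda u, v: u / 10.0  # the user's 'probability texture'
```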

The only way I could make that work now is to build "noise probability" into the Randomized Bitmap shader itself. Which bloats it with way too many parameters, and that noise will still not satisfy the users who want the distribution of graffiti on a wall to follow some particular distribution...



IF WE HAD my proposed functionality, this would be trivial:

Rather than my code having to do texture lookups, it could just have ten regular color inputs.

The inputs would then be looked up with something like

    color blah = evaluate(input, "UVW", myUVcoord);

Assuming all texturing shaders by convention have a "UVW" input, any shader connected to my input whose own "UVW" input is unconnected would have the value I send to the evaluate function fed in. Done!

The same would work for the probability; I would just do

    float probability = evaluate(Probability, "UVW", cellPosition);



I could think of infinitely more examples.



/Z





Olivier Paquet <olivie...@...>
 

On Wednesday, July 4, 2018 at 00:53:50 UTC-4, Master Zap wrote:
On u / v ... you took a little too much out of a simplified example....

Besides, the model isn't "mess with global variables" in the uncontrolled way you are insinuating. 
Rather, it is a well-defined, cleanly scoped, temporary modification of state (parameters or "globals") while re-running code with a well defined set of parameters.

It's still global parameters. And the whole thing would be limited to globals which is not great if you eventually want to drive some other parameter of the network in a similar way (from the downstream shader).
 
(Besides, it's not my fault OSL is working with a concept of "magical globals" - one of the parts of the language I dislike the most) :)
 
There was a thread a few days ago which discussed trying to phase them out. Which is another reason to find a better solution to this problem.

This is extremely useful in an insane number of cases. The "ShittyBlur" example was just the 1st that popped up in my head and was an easy way to describe what I mean conceptually in a simple example. Don't take it too literally...
 
I'm not saying it's not useful. It is. It's also a dangerous tool which can be abused to write horribly inefficient or unmaintainable shaders. But that's beside the point. What I am trying to get to is that we should try to come up with a way to do the same thing without being limited to globals. It would certainly be useful to manipulate other values besides "u,v" in some cases. It would also be nice for that to be explicit in how the shaders are connected (would probably make the optimizer's job easier). Or perhaps it should be a way to call a completely separate shading network, overriding any set of inputs we want to. I don't know quite what it should look like but it's worth trying to see beyond the immediate need. If we really can't come up with anything better then fine. But I still have a bad feeling about it coming back to bite us at some point.

Olivier


Zap Andersson <z...@...>
 

I never said it would be limited to globals... I want to be able to override anything the "evaluated" shader references.

/Z


--
You received this message because you are subscribed to the Google Groups "OSL Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to osl-dev+unsubscribe@googlegroups.com.
To post to this group, send email to osl...@....
Visit this group at https://groups.google.com/group/osl-dev.
For more options, visit https://groups.google.com/d/optout.



--
--
Håkan "Zap" Andersson - http://twitter.com/MasterZap - the man, the myth, the concept.
--


Larry Gritz <l...@...>
 

Sorry for the delay. Sometimes when people ask about things that are sufficiently deep, it takes me a couple days to gather my thoughts and respond. (Limiting factor: best ideas seem to come in the shower, there are only so many showers I can take daily.)

I hear you guys and understand what you want: a way to take a shader node graph and call it like a subroutine, potentially multiple times with differing parameters.

I can see that there are some neato things you could do with this. The question is how to do it in a clean way that doesn't totally bork all the optimizations we rely on or encourage awful shaders that are hard to understand.

Let's sweep the "modify globals" ugliness out of the way by assuming that this would be done *after* a previously-discussed migration away from "global variables" and toward just using what looks syntactically like shader parameters (which in some cases may be understood to bind to things in the shaderglobals or outputs).

I'm not a fan of "every time you grab a parameter, it magically re-evaluates the upstream network", because it's just a recipe for confusion and wasted computation. I won't even go into the details, but suffice it to say that I can rattle off edge cases where optimizations we depend on would be ruined.

But Zap's idea of some kind of explicit evaluate(paramname, ...) has merit and is possibly growing on me. It's a sharp tool that can easily be used to hurt yourself, but at least it can never truly surprise you -- things can only re-execute if you call it, and it's very visible when you do this. It's still fraught with danger, details that we'd need to ponder, and some limitations we'd want to impose. As examples:

* Presumably it would invalidate the results *all the way up the chain* from the parameter being pulled?

* You would probably only be allowed to set new values of parameters (of the upstream subnet) that were previously marked as `lockgeom=0`, or else they might have been constant-folded away. (Unless, ick, you assume that any subnets that are potentially named by evaluate() would have wholesale drastically less optimization done to them.)

* If results from that multiply-executed subnet are used elsewhere in the network (I mean, one of its outputs is connected to some other input elsewhere, besides the things getting an evaluate() call), then the fact that the evaluation *order* of nodes is nondeterministic means that you won't know which of the output values (from the potentially several times it was called) will end up copied to other places. So maybe it's only safe/predictable to do this if the subnet in question ONLY connects to the node that is doing the evaluate() call. It probably also follows that nodes in a subnet implicated by an evaluate() call would not be able to participate in the "identical node deduplication" optimization that we currently do.


Now, as an aside, I just realized that a lot of your fantasy feature may be partially fulfilled with the "osl.imageio". Are you familiar with that?  Look in src/osl.imageio/oslinput.cpp, the comments explain the gist, but the short version of the story is that it's a DSO/DLL for OIIO that dynamically recomputes image pixels (including texture) by executing OSL code. So you would access it in your shader as a texture, literally texture("...", xcoord, ycoord), and it would be running OSL code behind the scenes. The only parameters that "vary" are the 2D texture lookup coordinates, but you can set other parameters on the subnet by embedding them in the "filename" with a REST-like syntax. And further, since it goes through the texture system, it is able to antialias/filter itself and responds to the usual texture controls (including blur!).

I think that for a straightforward "texture bombing" or "triplanar mapping", you could use your usual bomb/triplanar texture and just use a specially crafted texture filename to trigger OSL code to run to make a procedural pattern that you would be bombing. Maybe that at least partially scratches the itch?

-- lg



--
Larry Gritz





Master Zap <zap.an...@...>
 


I hear you guys and understand what you want: a way to take a shader node graph and call it like a subroutine, potentially multiple times with differing parameters.


In a nutshell, yes. 

 
 
I'm not a fan of "every time you grab a parameter, it magically re-evaluates the upstream network", because it's just a recipe for confusion and wasted computation. I won't even go into the details, but suffice it to say that I can rattle off edge cases where optimizations we depend on would be ruined.


Right, and I'm not sure I ever asked for that. Any "regular" request for an output value would work exactly the way it does today, and would be 100% unaffected by this additional feature.

 
But Zap's idea of some kind of explicit evaluate(paramname, ...) has merit and is possibly growing on me. It's a sharp tool that can easily be used to hurt yourself,

It's no worse than message passing, which, ick, should never ever have been in there. Ever. :)
 
but at least it can never truly surprise you -- things can only re-execute if you call it, and it's very visible when you do this. It's still fraught with danger, details that we'd need to ponder, and some limitations we'd want to impose. As examples:

* Presumably it would invalidate the results *all the way up the chain* from the parameter being pulled?

No, not really... only downstream from the variable in the chain you modified.... (or, in the case of those icky globals, anything reading that global)

In a sense, an "evaluate" call of a graph would build a separate version of that optimized graph under the hood, optimized differently for that particular "evaluate" call.

You modify "P" to re-execute some 3d procedural? 

Only the nodes in that shading graph referencing P need to be re-run. If you modify an input called "UVW", only inputs by that name (that are not connected to upstream nodes) would have their values modified, and only code downstream from that point (those points) even has to re-run.
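A toy Python model of that partial re-execution, assuming a pull-style graph where we can test whether a node transitively references the overridden name (entirely hypothetical machinery, just to illustrate the scoping):

```python
BASE_GLOBALS = {"u": 0.25, "v": 0.75, "P": 1.0}  # stand-ins for shader globals

class Node:
    """Toy graph node: fn applied to parents, which are either other nodes
    or the name of a global/input the node references."""
    def __init__(self, name, fn, *parents):
        self.name, self.fn, self.parents = name, fn, parents

    def depends_on(self, var):
        return any(p == var if isinstance(p, str) else p.depends_on(var)
                   for p in self.parents)

def evaluate(node, overrides, cache, runs):
    """Re-run a node only if it transitively references an overridden name;
    otherwise reuse the value cached from the 'regular' run."""
    if node.name in cache and not any(node.depends_on(k) for k in overrides):
        return cache[node.name]
    runs.append(node.name)  # record which nodes actually re-executed
    env = dict(BASE_GLOBALS, **overrides)
    args = [env[p] if isinstance(p, str) else evaluate(p, overrides, cache, runs)
            for p in node.parents]
    return node.fn(*args)

# 'noise' references only P; 'mix' references noise and u.
noise = Node("noise", lambda P: P * 2.0, "P")
mix = Node("mix", lambda n, u: n + u, noise, "u")

cache = {"noise": evaluate(noise, {}, {}, [])}   # regular run, result cached
runs = []
result = evaluate(mix, {"u": 0.9}, cache, runs)  # override u: noise is NOT re-run
```

Overriding "u" re-runs only the mix node; overriding "P" would also pull the noise node back in.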
 

* You would probably only be allowed to set new values of parameters (of the upstream subnet) that were previously marked as `lockgeom=0`, or else they might have been constant-folded away. (Unless, ick, you assume that any subnets that are potentially named by evaluate() would have wholesale drastically less optimization done to them.)

Ah, but the optimization of the re-execution is an independent problem from the optimization of the graph when used "normally". The optimizer would need to treat this as its own thing, effectively building a parallel "clone" of the graph under different optimization constraints.

But these constraints are well defined; we know exactly what is being modified - we can even tell by the parameter list of "evaluate"!

 
* If results from that multiply-executed subnet are used elsewhere in the network (I mean, one of its outputs is connected to some other input elsewhere, besides the things getting an evaluate() call), then the fact that the evaluation *order* of nodes is nondeterministic means that you won't know which of the output values (from the potentially several times it was called) will end up copied to other places. So maybe it's only safe/predictable to do this if the subnet in question ONLY connects to the node that is doing the evaluate() call. It probably also follows that nodes in a subnet implicated by an evaluate() call would not be able to participate in the "identical node deduplication" optimization that we currently do.

No no, this feature cannot affect regular evaluation in any way... it has to be side-effect free. Any caching of numbers and values has to be done separately for the "regular" call of the graph vs. each "evaluate" call of the graph.
 
Now, as an aside, I just realized that a lot of your fantasy feature may be partially fulfilled with the "osl.imageio". Are you familiar with that?  

I am, and it may be a neat toy, but remember that my constraint is making the same OSL code run everywhere... which is already a big problem today, with imageio being configured differently on different OSL-capable renderers...

(Of course, building new features into OSL would cause the same problem, I admit, and herding the cats of the renderers to align on an OSL version, well... :) )

The osl.imageio trick is fun, but effectively limited to 2d use cases in the unit square...

...and while that might solve the trivial case of a "bitmap+colorcorrect" being an input to Randomized Bitmap, it won't solve ANY of the more interesting cases, like plugging a 3d noise field into a fake raymarching shader to fake volumetric effects, or even the quite real-world (for users) problem of mapping the "probability" parameter of Randomized Bitmap to something under the user's control (as described in my previous message).



/Z

P.S. I even pondered building a "shading graph to bitmap" shader into 3ds Max itself, which would effectively 2d-bake any shade tree (OSL or not) and appear to the downstream side of the shade tree as if it were a regular bitmap. Similar to the osl.imageio "trick", but done wholly on the app side.
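The baking idea above can be sketched in a few lines of plain Python (hypothetical names, not the actual 3ds Max feature): sample a shading function over the unit UV square into a pixel grid that downstream nodes could treat as an ordinary bitmap.

```python
def bake(shade_fn, width, height):
    # Sample at texel centers; row y = 0 corresponds to v near 0.
    return [
        [shade_fn((x + 0.5) / width, (y + 0.5) / height) for x in range(width)]
        for y in range(height)
    ]

# Stand-in for an arbitrary shade tree evaluated at (u, v): a 2x2 checker.
checker = lambda u, v: 1.0 if (int(u * 2) + int(v * 2)) % 2 == 0 else 0.0

image = bake(checker, 4, 4)
```

The real feature would of course also need filtering and a resolution choice, but the core of the "2d-bake" is just this sampling loop.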


Master Zap <zap.an...@...>
 

Let's make an even more real-world example. Here's my up-to-4-dimensional Mandelbrot shader:

// A simple Mandelbrot set generator shader
// mandelbrot.osl by Zap Andersson
// Modified: 2018-02-08
// Copyright 2018 Autodesk Inc, All rights reserved. This file is licensed under Apache 2.0 license
// https://github.com/ADN-DevTech/3dsMax-OSL-Shaders/blob/master/LICENSE.txt

shader Mandelbrot
    [[ string help = "A four dimensional mandelbrot/julia set generator" ]]
(
    vector UVW = vector(u,v,0)
        [[ string help = "The coordinate to look up. Defaults to the standard UV channel" ]],
    vector Center = 0,
    float Scale = 0.35,
    float ZImaginary = 0.0,
    int Iterations = 100,
    float ColorScale = 1.0,
    float ColorPower = 1.0,
    output color Col = 0,
    output float Fac = 0.0
)
{
    vector pnt = (UVW - point(0.5,0.5,0)) / Scale - (Center + point(0,0.66,0));

    float cR = pnt[0];
    float cI = pnt[1];
    float zR = pnt[2];
    float zI = ZImaginary / Scale;

    int num = 0;
    for (num = 0; num < Iterations; num++)
    {
        float zR2 = zR * zR; // Real squared
        float zI2 = zI * zI; // Imag. squared
        if (zR2 + zI2 > 4.0)
            break; // Escapes to infinity
        zI = 2 * zR * zI + cR;
        zR = zR2 - zI2 + cI;
    }

    Fac = (num * ColorScale) / Iterations;
    Col = wavelength_color(420 + pow(Fac, ColorPower) * 2000);
}
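For reference, the escape-time loop above transcribes directly to plain Python (keeping the shader's assignment of cR/cI exactly as written), which makes the math easy to sanity-check outside a renderer. The function name `escape_count` is mine, not from the shader.

```python
def escape_count(cR, cI, zR, zI, iterations):
    for num in range(iterations):
        zR2 = zR * zR  # real part squared
        zI2 = zI * zI  # imaginary part squared
        if zR2 + zI2 > 4.0:
            return num  # escaped to infinity
        zI = 2.0 * zR * zI + cR
        zR = zR2 - zI2 + cI
    return iterations

# The origin never escapes; a point far outside escapes immediately.
inside = escape_count(0.0, 0.0, 0.0, 0.0, 100)
outside = escape_count(2.0, 2.0, 2.0, 2.0, 100)
```

`Fac` in the shader is then just this count, scaled by ColorScale and divided by Iterations.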



I was able to render this volumetrically into this fancy movie:

https://www.youtube.com/watch?v=dNX4yhW3CJ0


But that thing was tediously rendered in Arnold with actual volumetric shading.


I realized that I could probably fake it 1000 times faster by making my own hacky raymarcher.


But since we lack an evaluate function, the only way to do it would be to literally hand-rewrite the shader like this:




void mandelbrot
(
    vector UVW,
    vector Center,
    float Scale,
    float ZImaginary,
    int Iterations,
    float ColorScale,
    float ColorPower,
    output color Col,
    output float Fac
)
{
    vector pnt = (UVW - point(0.5,0.5,0)) / Scale - (Center + point(0,0.66,0));

    float cR = pnt[0];
    float cI = pnt[1];
    float zR = pnt[2];
    float zI = ZImaginary / Scale;

    int num = 0;
    for (num = 0; num < Iterations; num++)
    {
        float zR2 = zR * zR; // Real squared
        float zI2 = zI * zI; // Imag. squared
        if (zR2 + zI2 > 4.0)
            break; // Escapes to infinity
        zI = 2 * zR * zI + cR;
        zR = zR2 - zI2 + cI;
    }

    Fac = (num * ColorScale) / Iterations;
    Col = wavelength_color(420 + pow(Fac, ColorPower) * 2000);
}



shader Mandelbrot
    [[ string help = "A four dimensional mandelbrot/julia set generator" ]]
(
    float start = 0.0,
    float end = 100.0,
    int steps = 10,
    vector UVW = vector(u,v,0)
        [[ string help = "The coordinate to look up. Defaults to the standard UV channel" ]],
    vector Center = 0,
    float Scale = 0.35,
    float ZImaginary = 0.0,
    int Iterations = 100,
    float ColorScale = 1.0,
    float ColorPower = 1.0,
    output color Col = 0,
    output float Fac = 0.0
)
{
    float fac = 0.0;
    color col = 0.0;
    float delta = (end - start) / steps;

    for (int i = 0; i < steps; i++)
    {
        // Jittered sample position along the eye ray
        point pt = P + I * (start + delta * (i + noise("uperlin", P*10000)));
        float z = pt[2];
        pt[2] = 0.0;
        mandelbrot(pt, Center, Scale, z, Iterations, ColorScale, ColorPower, col, fac);
        Fac += fac;
        Col += col;
    }
    Col /= steps;
    Fac /= steps;
}
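The ray-march loop above reduces to a simple pattern, sketched here in plain Python (hypothetical names): step along the ray, evaluate the field at each jittered sample, and average. `field` stands in for the mandelbrot() sub-function, i.e. the very thing an evaluate() call on a connected shader would let the graph supply from outside.

```python
def raymarch_average(origin, direction, start, end, steps, rng):
    delta = (end - start) / steps
    total = 0.0
    for i in range(steps):
        t = start + delta * (i + rng.random())  # jittered distance along the ray
        p = [origin[k] + direction[k] * t for k in range(3)]
        total += field(p)
    return total / steps

def field(p):
    # Stand-in scalar field; any 3d texture could be plugged in here.
    return p[2]

class MidpointRng:
    # Deterministic stand-in for the noise() jitter: always sample mid-step.
    def random(self):
        return 0.5

avg = raymarch_average((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.0, 1.0, 4, MidpointRng())
```

With the mid-step rng and this linear field, the average over [0, 1] comes out to exactly 0.5, which is an easy sanity check on the stepping math.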



I had to rewrite the "shader" Mandelbrot into the "sub-function" mandelbrot, and then call that N times from my main shader, effectively forcing me to build my 3d texture INTO my ray marcher... That's silly; I shouldn't have had to do that!


Had there been an "evaluate" function, I would just have plugged my regular Mandelbrot into my raymarcher and, as they say, Bob would have been my Father's Brother.


It would have *worked* exactly the same. The final optimized backend shading code would probably be identical in either case... but it would have been much more useful and flexible for the user, and much easier on the shader developer...



/Z




Olivier Paquet <olivie...@...>
 

On Thursday, July 5, 2018 at 01:51:31 UTC-4, Master Zap wrote:

I realized that I could probably fake it 1000 times faster by making my own hacky raymarcher.


But since we are lacking evaluate function, the only way to do it would be to literally hand-rewrite the shader like this:


Shader authors writing mini-renderers in their shaders is one of the ways RSL shaders got really awful. I consider it a feature of OSL that it generally prevents this :-)

Olivier


Master Zap <zap.an...@...>
 

Uhm.. you are arguing my point for me :)

/Z




Paolo Berto <pbe...@...>
 

+1


--
You received this message because you are subscribed to the Google Groups "OSL Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to osl...@....
To post to this group, send email to osl...@....
Visit this group at https://groups.google.com/group/osl-dev.
For more options, visit https://groups.google.com/d/optout.


--
paolo berto durante
j cube inc. tokyo, japan
http://j-cube.jp


Moritz Mœller (The Ritz) <virtu...@...>
 

On July 5, 2018 20:18:40 Olivier Paquet <olivi...@...> wrote:



Indeed. +1.
Don't get me wrong, I'm seeing lots of good reasons to add some evaluate() like call to OSL.
But I agree that writing a ray marcher inside a shader is not one of them. ;)

Cheers,

.mm


Master Zap <zap.an...@...>
 

OMG... stop taking my illustrative samples as the point... they are not the point... the feature is the point... I'm just trying to come up with quick cases to illustrate it!!!

/Z




Larry Gritz <l...@...>
 

Point is made. There are many cool applications of this feature. It's a good suggestion; now we just need to figure out the best way to express it in the language. Language changes are forever, so we don't rush into new ones before we've had a good while to mull them over.

-- lg





--
Larry Gritz





Changsoo Eun <chang...@...>
 

I really like where this conversation is going as an "artist".


On Saturday, June 30, 2018 at 11:07:20 PM UTC-7, Master Zap wrote:
Lots of renderers and shading systems allow this. 

I know OSL wants to be all pure and fancy, with the shader graph being a pure DAG - and I can see the theoretical purity in that - but artists keep asking me for these things.
And I don't think it's impossible, really. It can be done in a well-defined way.

What am I talking about? About treating attached shader inputs effectively as a "subroutine", to be able to "call" it with overridden globals, maybe even multiple times.

This is super useful in so many instances.

Take for example my Randomized Bitmap shader... which places random things in random places all over an object... right now it has to be limited to pure textures (images) because there would be no way for the Randomized Bitmap shader to modify the UV lookup of an attached input.

I would suggest it to work something like this:

#1: A new data type called "shader"
#2: A syntax to "call" this shader with overridden globals or parameters.

So like this:

shader ShittyBlur (
    int BlurSamples = 8,
    float BlurWidth = 0.1,
    shader ShaderToBlur,
    output color Blurry = 0.0
)
{
    for (int i = 0; i < BlurSamples; i++)
    {
        vector rnd = (noise("hash", P, i) - 0.5) * BlurWidth;
        Blurry += ShaderToBlur.call("u", u + rnd[0], "v", v + rnd[1]);
    }
    Blurry /= BlurSamples;
}

The above shader would multi-sample the attached input shader, calling it again and again with slightly different u and v coordinates and averaging the results.

This is just one of 1000 examples of how this could work nicely.

Arnold uses this for many things, like the Bump2d shader's way of calling the input shader multiple times to compute the gradient, or the "Toon" shader's way of calling the shaders on its "tonemap" inputs to apply gradients to shaded results.
Legacy 3ds Max shaders use this a lot to do things like camera mapping, or in other ways modify the UV lookup of connected shaders, just like in this example.

I don't think this is rocket surgery to implement on the OSL side of things either, and it would be a spiffy addition to OSL 2.0, IMHO.


Thoughts, everyone?


/Z


Changsoo Eun <chang...@...>
 

Any update?

