
I'm interested in your project #2. Since you mentioned B-splines, do you deal with trimmed surfaces? Would you have any reading recommendations for someone learning about surface optimisation?


Hey, thanks for your interest! I've avoided trimmed surfaces, in part because I'm interested in doing one kind of analysis or another on or with the parametric geometry, and trimmed surfaces are not so easy to work with for some of the finer control I want from my optimization tools. (They often cause compatibility issues with export between programs as well, but that becomes more important only if somebody uses your stuff ;)

I like other methods of getting local control, or finer shape control, of surfaces. In my stuff I've used truncated hierarchical B-splines (THB-splines), which are great for adding detail but useless for changing topology. People speak highly of (analysis-suitable) T-splines, but I say they are complicated and subdivision may be better overall now anyway. Generally speaking, I think the whole industry will have to go to subdivision. (Among friends I'd say it may carry right down to poly meshes via differential geometry, but those two representations might play well together given the right tools.)

Reading recommendations:

For everything you ever wanted to know about a B-spline, including a C++ library implementation from scratch, highly documented and explained: 1.) Piegl and Tiller, "The NURBS Book". This includes a tiny bit of shape control via optimization.

For an explanation of the basics of B-spline nonlinear optimization with Lagrange multipliers, with a focus on ships, there is a chapter here that takes you to the state of the art, circa 1995: 2.) Nowacki, et al., Computational Geometry for Ships

3.) Tony DeRose's book "Wavelets for Computer Graphics" actually has some good scripts getting at the basics of wavelet B-splines and some facets of hierarchical parametric geometry.

The above is a start at form parameter design for B-splines. This was okay 20 years ago. It's still important as a basis for understanding optimization of parameterized shape. Even subdivision surfaces have control points.
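To make the "control points as design variables" idea concrete, here is a minimal sketch (my own illustration, not taken from any of the books above) of a planar B-spline curve driven by a handful of control points, using scipy.interpolate.BSpline. In form parameter design, those control point coordinates are exactly what an optimizer would move:

    import numpy as np
    from scipy.interpolate import BSpline

    k = 3                                          # cubic curve
    ctrl = np.array([[0., 0.], [1., 2.], [2., -1.],
                     [3., 1.], [4., 0.]])          # 5 control points = design variables
    n = len(ctrl)
    # Clamped knot vector so the curve passes through the end control points.
    t = np.concatenate(([0.]*(k+1),
                        np.linspace(0., 1., n - k + 1)[1:-1],
                        [1.]*(k+1)))
    curve = BSpline(t, ctrl, k)

    u = np.linspace(0., 1., 50)
    pts = curve(u)                                 # (50, 2) points on the curve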

Generally B-splines were found not to be flexible enough for representing local details efficiently. Further, the optimization techniques still require a lot of manual setup to get things right...

The next steps are still in development: subdivision surfaces are a way forward for shape representation. Generally, they were more problematic for computing engineering quantities of interest, especially and precisely where they "go beyond" the B-spline to allow surfaces of greater flexibility; that is where the analysis suitability breaks down to some extent. Again, this has been patched up in the last couple of decades, but change is still slow to come to the engineering industry.

I think it's well worthwhile to look at geometric optimization in computer graphics as well. See the Caltech Multi-Res group, Keenan Crane at CMU (the Geometry Collective), and tons of SIGGRAPH papers where discrete differential geometry has been leveraged to do neat things with shape. (E.g. curvature flow: https://www.cs.cmu.edu/~kmcrane/Projects/ConformalWillmoreFl... I think there is newer work building off this and adding more complicated constraints, but I can't remember offhand. As is, they have some already!)

Back to the point: you wanted optimization readings. Well, it's mostly in the literature, and the literature is mostly kind of vague when it comes to parametric optimization of B-splines. Though the high points are mentioned, the detail is often hardly better than what you find in Nowacki, 1995. To this end, I have some really specific entry-level PDFs that might help, and the first part of my stuff is written up in this paper: https://www.sciencedirect.com/science/article/abs/pii/S01678... This deals mostly with curves, but has a direct extension to surfaces. Automatic differentiation really helps here! (I never published the bit on the extension to surfaces (with all their attendant properties as potential constraints) because my professor said direct surface optimization was "too expensive". Looking at the discrete differential geometry papers as of late, I tend to disagree.)


I keep coming back to bother you :). One of the newer tricks to make parameter fitting less expensive, developed recently, is active subspaces. I thought you might be interested in playing around with it.

Most of the research is being done out at the Colorado School of Mines by Paul Constantine. The basic idea is that you reduce your parameter space to the eigenvectors of the sensitivity matrix with the largest eigenvalues. Some of the work I have seen in constitutive modeling (and UQ) has effectively reduced parameter spaces of a couple hundred DOF to about 5-6.


Scanning through some literature, does this method require that the input space be equipped with a probability density function “quantifying the variability of the inputs”?

Seems like that would be the (or a function of the) thing we are after in sensitivity analysis.

On the other hand, it appears that I may be able to get away with some naive assumption about this quantity, compute eigenvectors and find the active subspace... and then vary the mode in these directions.

Is this for local or global optimization?

Part of my stuff was about finding a way to guarantee that a particular set of inputs results in a feasible design. (Edit: maybe active subspace could replace this... or exclude poor regions faster)

The other part (the gradient-driven part) solves curves and surfaces for shapes which conform to constraints. We really need the fine details to match, as the constraints are often of the equality type.

From there, it seems this active subspace method could really help in searching the design space (from what I read, this is the purpose): a more efficient method of response surface design. My stuff is agnostic about this.

Then again, surely it could be of use in more efficiently optimizing constrained curves and surfaces... I will keep thinking, but it seems a secondary use at best, or would you agree?


We should move this conversation to email (as I will check that more frequently and will be more likely to get back to it). See email in my profile.

Active subspaces come from the uncertainty quantification community. If you assume all your parameters are Gaussian, then the sensitivity matrix is directly correlated to the probability density functions. I find it easier to think in terms of the sensitivity matrix, but it's useful to realize that the sensitivity matrix approximates (complex) probability distributions.

My thought was that if you were optimizing over a huge parameter space theta = [theta_1, ..., theta_m], then you could reduce the parameter space by only looking at theta_reduced = [theta_i | d loss/d theta_i > threshold], or you could look at active subspaces and change the parameter space to xi = [xi_1, ..., xi_m] where xi_i = SUM_j a_ij theta_j.

The xi_i could be given by the eigenvectors with the largest eigenvalues of the sensitivity matrix S_ij = d^2 loss / (d theta_i d theta_j).
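As a rough sketch of what I mean (just my plain-NumPy reading of the idea, not Constantine's code, and with a random stand-in for the sensitivity matrix), you take the dominant eigenvectors of the sensitivity matrix and use them as the reduced coordinates:

    import numpy as np

    def active_subspace(S, k):
        """Columns of W span the k-dimensional active subspace of a symmetric S."""
        w, V = np.linalg.eigh(S)                 # eigenvalues in ascending order
        order = np.argsort(np.abs(w))[::-1]      # sort by magnitude, largest first
        return V[:, order[:k]]

    m, k = 200, 5
    A = np.random.rand(m, m)
    S = 0.5*(A + A.T)             # stand-in for S_ij = d^2 loss / (d theta_i d theta_j)
    W = active_subspace(S, k)

    xi = np.random.rand(k)        # reduced coordinates xi_1 ... xi_k (= W.T @ theta)
    theta = W @ xi                # map back to the full parameter space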

Wouldn't it be nice if Hacker News supported LaTeX?

I haven't done any work here, but I suspect I will be doing some of this towards the end of summer.


Hey this is cool! I did not see your comment until now. Let me take a look (as soon as I can) and I will see what I can come back to you with.

Yeah, Colorado School of Mines! Small world, I am in the metro area. I've actually talked with a physics prof from there about helping with a project.


What library are you using for automatic differentiation? I am working on building code to optimize (and later build) high-quality finite element meshes for structural analysis. For the initial proof of concept, I am simply doing finite differences, but would prefer to eventually add AD. I am unsure which packages are suitable (currently all NumPy and SciPy).


Both in the Python version and, so far, in C++, I am using my own forward-mode implementation in NumPy and Eigen, respectively. (Why? Well, it was easy, I wanted to learn, it's been fast enough, and most critically, it allowed me to extend it by using interval-valued numbers underneath the AD variables.)

Here's where I do something kind of funny in the AD implementation: basically, just write a class that overloads all the basic math ops with a structure containing the computations of the value, the gradient, and the Hessian. The trick, if there is any, is to have the basic AD variables store gradient vectors with a "1" in a unique spot for each separate variable (and a zero elsewhere). Hessians of these essential variables are zero matrices. Mathematical combinations of the AD variables automatically accrue the gradient and Hessian of ...whatever the expression is. Lagrange multipliers are AD variables which extend the size of your gradient. Oh, and each "point" in, say, 3D is actually 3 variables, so your space (and size of gradient) is 3N + number of constraints in size. Write a Newton solver and you are off and running.
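A minimal sketch of that scheme in NumPy (a hypothetical toy, not the actual implementation; only +, -, and * are overloaded here, and the objective/constraint are made up for illustration):

    import numpy as np

    class ADnum:
        """Scalar carrying value, gradient, and Hessian; ops overloaded below."""
        def __init__(self, value, grad, hess):
            self.v = value
            self.g = np.asarray(grad, dtype=float)    # (n,) gradient
            self.h = np.asarray(hess, dtype=float)    # (n, n) Hessian

        @staticmethod
        def variable(value, index, n):
            # Seed variable: a "1" in its unique gradient slot, zero Hessian.
            g = np.zeros(n); g[index] = 1.0
            return ADnum(value, g, np.zeros((n, n)))

        def __add__(self, other):
            if not isinstance(other, ADnum):
                other = ADnum(other, 0.0*self.g, 0.0*self.h)
            return ADnum(self.v + other.v, self.g + other.g, self.h + other.h)
        __radd__ = __add__

        def __mul__(self, other):
            if not isinstance(other, ADnum):
                return ADnum(self.v*other, self.g*other, self.h*other)
            g = self.g*other.v + other.g*self.v                 # product rule
            h = (self.h*other.v + other.h*self.v
                 + np.outer(self.g, other.g) + np.outer(other.g, self.g))
            return ADnum(self.v*other.v, g, h)
        __rmul__ = __mul__

        def __sub__(self, other):
            return self + (-1.0)*other

    # Toy problem: minimize x^2 + y^2 subject to x + y = 1. The Lagrange
    # multiplier is just a third AD variable, so gradient and Hessian grow by one.
    n = 3
    def lagrangian(z):
        x, y, lam = (ADnum.variable(z[i], i, n) for i in range(n))
        return x*x + y*y + lam*(x + y - 1.0)

    z = np.array([0.3, 0.9, 0.0])
    for _ in range(5):
        L = lagrangian(z)
        z = z + np.linalg.solve(L.h, -L.g)   # Newton step on the stationarity system
    print(z)                                 # -> approximately [0.5, 0.5, -1.0]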

This would be pretty hefty (expensive) for a mesh. I've used it successfully for splines, where a smaller set of control points controls a surface. Direct optimization on a mesh sounds expensive to me. I assume you looked at constrained mesh smoothers? (E.g. old stuff like transfinite interpolation, Laplacian smoothing, etc.?) Maybe newer stuff in discrete differential geometry can extend some of those old capabilities? What is the state of the art? I have a general impression the field "went another way" but I'm not sure what that way is.
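(For reference, a bare-bones sketch of the Laplacian smoothing I mean; the mesh representation here is hypothetical, just vertex coordinates plus a per-vertex neighbor list, with boundary vertices held fixed:)

    import numpy as np

    def laplacian_smooth(verts, neighbors, boundary, iters=10, lam=0.5):
        """Nudge each interior vertex toward the centroid of its neighbors."""
        v = np.asarray(verts, dtype=float).copy()
        fixed = set(boundary)
        for _ in range(iters):
            new = v.copy()
            for i, nbrs in enumerate(neighbors):
                if i in fixed or not nbrs:
                    continue                        # hold boundary vertices in place
                centroid = v[list(nbrs)].mean(axis=0)
                new[i] = v[i] + lam*(centroid - v[i])
            v = new
        return v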

As for the auto diff, I’ve also got a version that does reverse mode via expression trees, but the fwd mode has been fast enough so far and is very simple. Nice thing here is that overloading can be used to construct the expression tree.

Of course if you do only gradient optimization you may not need the hessian. It’s there for Newton’s method.


Thanks! I am pretty sure nobody does direct optimization on mesh quality because it is hefty. I did come across a PhD thesis which was doing it for fluid-structure interactions, and the conclusion was that it was inferior to other techniques. I have a few tricks which will hopefully make the problem more tractable.

I use FEMAP at my day job and have found Laplacian smoothing and FEMAP's other built-in tools wanting.

I am currently thinking that my goal is to try and use reinforcement learning to build high-quality meshes. In order to do that you need a loss function, and if you are building a loss function you might as well wrap an optimizer around it.


Huh, machine learning for high quality meshing sounds like a great idea! (RL sounds like turning this idea up to 11 — exciting stuff and best of luck!)

FEMAP seems to be a hot topic these days. Some folks at my work are building an interface to it for meshing purposes.



