
Modelica underneath uses DAE (differential algebraic equations) [1,2].

Think of it like a solver for many coupled differential equations. The coupling happens through "linear" equality constraints, such as "the output pressure variable of component A needs to equal the input pressure variable of component B".
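
For intuition, here is a minimal sketch (Python, nothing like Modelica's actual machinery) of how a DASSL-style solver treats such a system: everything becomes a residual F(t, y, y') = 0, including the algebraic coupling equation, and each implicit time step is handed to a nonlinear solver. The toy system and all names are invented for illustration:

    # Toy semi-explicit DAE: one differential equation plus an algebraic
    # coupling constraint, integrated with implicit Euler. Illustrative only.
    import numpy as np
    from scipy.optimize import fsolve

    def residual(y_new, y_old, h, t):
        """F(t, y, y') = 0 with y = [p_a, p_b]:
             p_a' = 1 - p_a    (dynamics of component A)
             0    = p_b - p_a  (coupling: B's input pressure equals A's output)
        """
        yp = (y_new - y_old) / h           # backward-difference approximation of y'
        return [yp[0] - (1.0 - y_new[0]),  # differential residual
                y_new[1] - y_new[0]]       # algebraic residual

    h, t = 0.01, 0.0
    y = np.array([0.0, 0.0])               # consistent initial conditions
    for _ in range(500):
        t += h
        y = fsolve(residual, y, args=(y, h, t))
    print(y)                               # both pressures relax towards 1.0

Real DASSL uses a variable-order BDF method and a damped Newton iteration, but the residual formulation is the same idea.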

Something that Modelica doesn't do very well is stochastic systems. There you would need to go into SDEs (stochastic differential equations), and that brings a lot of technicalities.
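
For contrast, here is the simplest SDE scheme, Euler-Maruyama, on a toy Ornstein-Uhlenbeck process; it shows where the technicalities start, since every step involves a random Brownian increment (the model and all parameters are invented):

    # Euler-Maruyama for the Ornstein-Uhlenbeck SDE
    #   dX = theta * (mu - X) dt + sigma dW
    import numpy as np

    rng = np.random.default_rng(0)
    theta, mu, sigma = 1.0, 0.0, 0.3
    h, n = 0.01, 1000
    x = np.empty(n + 1)
    x[0] = 1.0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(h))  # Brownian increment ~ N(0, h)
        x[k + 1] = x[k] + theta * (mu - x[k]) * h + sigma * dW
    print(x[-1])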

[1] Petzold, Linda R. A Description of DASSL: A Differential/Algebraic System Solver. Report SAND-82-8637, Sandia National Laboratories, 1982.

[2] Kunkel, Peter, and Volker Mehrmann. Differential-Algebraic Equations: Analysis and Numerical Solution. European Mathematical Society, 2006.


Any chance you know of good DAE books/resources that go into combining symbolics and numerics or parametrized DAEs?


There is an MIT course by the developers of MTK: https://sciml.github.io/ModelingToolkitCourse


Yes, this can be circumvented. But the optics are important.

Imagine a backdoor planted by a Russian asset. Linux could get removed from some list of approved operating systems that may be used in a government context.


> Imagine a backdoor planted by a Russian asset.

Email-based filtering of maintainers is not even close to what could be considered an adequate security measure. In fact, when a CISO or an OSS project starts caring about the optics, it's a red flag.


Umm. There has been deep integration between OPC UA and MQTT since ca. 2017.

For binary and JSON payloads, and with encryption and group key management features.
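
To make that concrete: on the wire, an OPC UA PubSub message over MQTT is just a payload published to a topic. Below is a rough Python sketch using paho-mqtt; the JSON field names follow the OPC UA Part 14 JSON mapping as I remember it, so treat the exact layout (and all names/hosts) as illustrative rather than normative:

    # Sketch: publishing an OPC UA PubSub JSON NetworkMessage over MQTT.
    import json
    import paho.mqtt.client as mqtt

    message = {
        "MessageId": "0",
        "MessageType": "ua-data",
        "PublisherId": "sensor-station-1",
        "Messages": [{
            "DataSetWriterId": 1,
            "Payload": {"Pressure": 101.3, "Temperature": 23.5},
        }],
    }

    client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x also takes a callback API version
    client.connect("broker.example.com", 1883)
    client.publish("opcua/json/sensor-station-1", json.dumps(message))
    client.disconnect()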

I see quite a bit of unsubstantiated bashing lately, especially advocacy of 20% solutions that solve only the easy parts with less effort.

What are you missing from OPC UA?


This is a big achievement after many years of work!

Here are a few links to see how the work was done behind the scenes. Sadly, Ars Technica has only odd links and doesn't provide the actual sources (why LinkedIn?).

Most of the work was done by Thomas Gleixner and team. He founded Linutronix, now (I believe) owned by Intel.

Pull request for the last printk bits: https://marc.info/?l=linux-kernel&m=172623896125062&w=2

Pull request for PREEMPT_RT in the kernel config: https://marc.info/?l=linux-kernel&m=172679265718247&w=2

This is the log of the RT patches on top of kernel v6.11.

https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-...

I think there are still a few things you need on top of a vanilla kernel. For example, the new printk infrastructure still needs to be adopted by the actual drivers (UART consoles and so on). But the size of the RT patchset is already much, much smaller than before. And being configurable out of the box is of course a big sign of confidence from Linus.

Congrats to the team!


Thomas Gleixner is one of the most prolific people I've heard of. He has been one of the most active kernel developers for more than a decade, leading the pack at times and currently ranked at position five:

https://lwn.net/Articles/956765/


TIL that in 2022 Linutronix indeed became an "independent subsidiary" of Intel:

https://www.linutronix.de/company/history.php


Universities, cities, regions and countries do have PR departments.

Arguably, the public image of a university is its most important asset, because it attracts the best students, researchers and third-party funding (public and private).


You assume that the salary is what it costs a company to have an employee. You need to at least double that.

There is overhead for the person themselves (employer-subsidized healthcare, office space, equipment) and there is overhead within the organization, like secretaries and accounting departments that cannot be billed to a client. And management layers, of course…
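
Back of the envelope, with invented but plausible numbers:

    # Rough fully-loaded cost of one employee-year. All numbers invented.
    salary = 40_000        # gross salary at Eastern European consulting rates
    overhead_factor = 2.0  # healthcare, office, equipment, admin, management
    loaded_cost = salary * overhead_factor
    print(loaded_cost)     # 80000.0 -- about one person-year on an 80k budget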

Most likely the 80k is enough to cover one person-year at a consulting agency in Eastern Europe.

That said, the new logo is atrocious.


Oh without a doubt I was underestimating how much a single employee costs a company (and therefore their clients too).

But I believe my point stands regardless—80K is very little money indeed from a group perspective.


They called it p-code at the time. The purported purpose was to simplify porting between architectures.

https://casadevall.pro/articles/2020/11/compiling-word-for-w...


from http://www.trs-80.org/multiplan/

    "Originally code-named “EP” (for “Electronic Paper”), Multiplan was written to use a very clever p-code compiler system created by Richard Brodie. This p-code system, which was also used later by Microsoft Word, allowed Microsoft to target Multiplan to a huge number of different 8-bit and 16-bit computer systems. (Charles Simonyi once estimated that Multiplan was available for 100 platforms.)"
Many people say that the 'p' in 'p-code' stands for 'pseudo', i.e. pseudo code. But the article archived at https://techshelps.github.io/MSDN/BACKGRND/html/msdn_c7pcode... says the 'p' is short for 'packed'.

    "Microsoft has introduced a code compression technology in its C/C++ Development System for Windows version 7.0 (C/C++ 7.0) called p-code (short for packed code) that provides programmers with a flexible and easy-to-implement solution for minimizing an application's memory requirements. In most cases, p-code can reduce the size of an executable file by about 40 percent. For example, the Windows Project Manager version 1.0 (resource files not included) shrinks from 932K (C/C ++ 7.0 with size optimizations turned on) to 556K when p-code is employed."

    "Until now, p-code has been a proprietary technology developed by the applications group at Microsoft and used on a variety of internal projects. The retail releases of Microsoft Excel, Word, PowerPoint�, and other applications employ this technology to provide extensive breadth of functionality without consuming inordinate amounts of memory."


I agree! The ArtHoles podcast is superb.

There were some updates on his Instagram lately. Fingers crossed for more episodes.

This is the first online-first content producer I'd consider paying real money for...


This article is about “solving” differential equations and not convex optimization.


> This article is about “solving” differential equations and not convex optimization.

This article is about solving nonlinear equations (not differential equations, not sure where you got that from). All NLP optimizers can solve nonlinear equations — it’s a special case where the objective is constant.
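
To see this concretely, here is a sketch with an off-the-shelf NLP method (scipy's SLSQP standing in for Ipopt; the toy system and starting point are made up): minimize a constant objective subject to the equations as equality constraints.

    # Solving a nonlinear equation system as an NLP with a constant objective.
    import numpy as np
    from scipy.optimize import minimize

    def equations(x):
        # x0^2 + x1^2 = 1 and x1 = x0^2 (circle meets parabola)
        return [x[0]**2 + x[1]**2 - 1.0, x[1] - x[0]**2]

    res = minimize(lambda x: 0.0,                  # constant objective
                   x0=np.array([0.5, 0.5]),
                   constraints={"type": "eq", "fun": equations},
                   method="SLSQP")
    print(res.x)  # a root: x0 ~ 0.786, x1 ~ 0.618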

Ipopt is not a convex solver, so I am not sure what convex optimization you are referring to. It is a general nonlinear solver, which covers nonconvex problems as well. (I worked on nonconvex nonlinear programs for a decade and it was my primary solver.)

Also all nonlinear equation systems are nonconvex. (A convex program requires equality constraints to be linear)
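
For reference, the standard form being invoked, sketched in LaTeX:

    % Standard form of a convex program:
    \begin{aligned}
      \min_x \quad      & f_0(x)        &  & f_0 \text{ convex} \\
      \text{s.t.} \quad & f_i(x) \le 0, &  & f_i \text{ convex},\ i = 1, \dots, m \\
                        & A x = b       &  & \text{(equality constraints must be affine)}
    \end{aligned}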


> all nonlinear equation systems are nonconvex

Maybe you have something more particular in mind when you say "systems", but not all nonlinear functions are non-convex. Least squares, for example, is nonlinear and convex.
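
Concretely, the one-line check, sketched in LaTeX:

    % Least squares is nonlinear in x yet convex:
    f(x) = \lVert A x - b \rVert_2^2, \qquad
    \nabla^2 f(x) = 2 A^T A \succeq 0 \quad \text{for all } x.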

Also note that IPOPT, while wonderful, is a local solver. It may not be limited to convex problems, but those are the only ones it's guaranteed to solve to optimality.


I was talking in terms of convex optimization. The criteria for a convex optimization problem are a convex objective, convex inequality constraints (and hence a convex feasible region), and linear (not just convex) equality constraints.

I’m aware that ipopt is a local solver.


> all nonlinear equation systems are nonconvex

> I was talking in terms of convex optimization

I'm not sure what subtlety you're pointing to here. As far as non-convex problems, though, my point is that IPOPT isn't special in this regard. Any convex solver can be a non-convex solver if you don't care about global optimality.


> Any convex solver can be a non-convex solver if you don't care about global optimality.

Aside: Structurally I’m not sure how this would be true.

Convex solvers have very specific forms. For instance, a QP solver requires a very particular form and does not admit an arbitrary nonconvex form, except for one: a non-PSD Hessian, which gives the concave problem.
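
For instance, the standard QP form, sketched in LaTeX:

    % Standard QP form:
    \min_x \; \tfrac{1}{2} x^T Q x + c^T x
    \quad \text{s.t.} \quad A x \le b, \quad E x = d.
    % Convex iff Q is PSD; an indefinite or negative-definite Q is the one
    % nonconvex shape this same machinery still accepts.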

My point is that any NLE posed inside an optimization problem gives rise to a nonconvex optimization problem with no guarantee of a global solution. So convex optimization is not applicable here.


The feasible region {x | f(x) = 0} is nonconvex no matter whether f is convex.


Consider the claim:

> The feasible region {x | f(x) = 0} is nonconvex no matter whether f is convex.

For some positive integer n and the set of real numbers R, consider a closed (in the usual topology on R^n) convex set A that is a subset of R^n. Define the function f: R^n --> R so that f(x) = 0 for all x in A, f(x) > 0 for all x not in A, and f is infinitely differentiable at every x in R^n. It is a theorem that such an f exists.

Then { x | f(x) = 0 } = A, which is both closed and convex, in contradiction to the claim.


I'm assuming you're referring to nonlinear f(x) because this statement is trivially false for linear f(x).

But consider the function f(x) = max(0, g(x) - c) where the following holds:

* g(x) is nonlinear, positive definite in x, and convex.

* c > 0

Then f(x) is nonlinear and convex (it's the pointwise maximum of convex functions), and the set {x : f(x) = 0} is a convex set.
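
A concrete instance, checked numerically (the particular g and c are made up): take g(x) = x^2 and c = 1, so f(x) = max(0, x^2 - 1) and the zero set is the interval [-1, 1].

    # f(x) = max(0, x^2 - 1): convex, nonlinear, with a convex zero set.
    import numpy as np

    def f(x):
        return np.maximum(0.0, x**2 - 1.0)

    xs = np.linspace(-3.0, 3.0, 10001)
    zero_set = xs[f(xs) == 0.0]             # max() returns exactly 0.0 inside [-1, 1]
    print(zero_set.min(), zero_set.max())   # ~ -1.0 and ~ 1.0: an interval, hence convex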


And this is why it is a bad idea for open source contributors to transfer their copyright.

If hundreds of commits are baked into a piece of software, under an open source license but without the full copyright transferred to a central legal entity, then it becomes impossible to change the license post hoc.


That may be true if a codebase is licensed under the GPL and has diverse copyright ownership. But the 3-clause BSD is not that.

3 clause BSD gives everyone permission to use it in new works that are made available using license terms of one’s own choosing, so long as the obligations of those 3 clauses continue to be met.


> 3 clause BSD gives everyone permission to use it in new works that are made available using license terms of one’s own choosing, so long as the obligations of those 3 clauses continue to be met.

But what I get from this is: the project switched away from the 3-clause BSD to something less permissive.


But the less permissive licence effectively only applies to new modifications of the (contributed) code, which is allowed by the BSD but not by the GPL.


The 3 clause BSD gives all the permissions that are needed for someone to add restrictions via their own license terms.

Licenses like the GPL come with an obligation that one not add restrictions when passing the software on to others.


So how did Red Hat add restrictions on the GPL code base of CentOS?


Red Hat says that you can't get future versions if you exercise your GPL freedoms. You are free to redistribute the latest version of RHEL or CentOS or whatever, including all source code for all packages in their repos. But they will never give you another version of any of their software if you do so.


When will developers learn that the BSD does not protect you or your users? I understand the philosophical reasons some folks like BSD/MIT-style licenses, but at the end of the day they are not much more than public domain: anyone can take someone’s work and contributions, make improvements and keep the entire thing — original work and contributions, as well as improvements — proprietary.

If you care about a software commons, if you care about benefiting from the improvements others make to your own software, if you care about your users benefiting from the improvements others make: use a copyleft license!


This is true for copyleft licenses, but not for permissive licenses. And you can still get a copy of the old version under the old license from someone who downloaded it before the license change.

