The parent is saying that it's fraud in professional games, but that this doesn't mean it requires legislation making cheating in all games explicitly illegal. Penalties in sports are enforced primarily through the organisations running the sport, and only some offenses involve actual law. If you cause someone financial harm, it's very likely already illegal.
Match-fixing in professional games is clearly fraud, but that doesn't mean that letting your friend win because it's his birthday should be prosecutable.
I'd personally be okay with making cheating in multiplayer games illegal (it's purely antisocial behavior that causes real financial harm and has no legitimate reason to exist), but prosecuting people who write code is beyond the pale.
Cheating in physical sport is not fraud unless done with the intent to fix an outcome. Conversely, cheating is not always necessary to fix an outcome, but the penalty is the same.
We had a toy LLVM attempt that compiled a few trivial loops but hooked into the unstable C++ LLVM API. Nobody really knew how to maintain it, and we got tired of chasing the unstable API, so we let it bitrot to death.
I don't expect any other JIT backend (libgccjit could be another possibility) would require any less knowledge of compilers and language design. Does Graal have some magic for us? Can it give us a JIT compiler even if most of us are not compiler writers or language designers?
Forget “this language is fast”, “this language has the libraries I need”, and “this language has the tool support I need”. The Truffle framework for implementing managed languages in Java gives you native performance, multi-language integration with all other Truffle languages, and tool support - all of that by just implementing an abstract syntax tree (AST) interpreter in Java.
Truffle applies AST specialization during interpretation, which enables partial evaluation to create highly optimized native code without the need to write a compiler specifically for a language. The Java VM contributes high-performance garbage collection, threads, and parallelism support.
So you won't need the know-how to write a compiler; instead you write an interpreter optimized for execution with Graal.
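To make "AST specialization" concrete, here is a toy sketch in Python (Truffle's real API is Java and far richer; the node names here are made up): a generic add node watches the operand types it actually sees at run time and rewrites itself into a type-specialized node, which is what gives the partial evaluator straight-line code to work with.

    class Node:
        def execute(self, frame):
            raise NotImplementedError

    class Literal(Node):
        def __init__(self, value):
            self.value = value
        def execute(self, frame):
            return self.value

    class GenericAdd(Node):
        # Uninitialized add node: observes its operand types once,
        # then rewrites itself into a specialized node.
        def __init__(self, left, right):
            self.left, self.right = left, right
            self.specialized = None
        def execute(self, frame):
            if self.specialized is None:
                l = self.left.execute(frame)
                r = self.right.execute(frame)
                if isinstance(l, int) and isinstance(r, int):
                    self.specialized = IntAdd(self.left, self.right)
                else:
                    self.specialized = ConcatAdd(self.left, self.right)
                return self.specialized.combine(l, r)
            return self.specialized.execute(frame)

    class IntAdd(Node):
        # Monomorphic: exactly the kind of node a partial evaluator
        # can inline into straight-line native code.
        def __init__(self, left, right):
            self.left, self.right = left, right
        def combine(self, l, r):
            return l + r
        def execute(self, frame):
            return self.combine(self.left.execute(frame),
                                self.right.execute(frame))

    class ConcatAdd(IntAdd):
        def combine(self, l, r):
            return str(l) + str(r)

    tree = GenericAdd(Literal(2), Literal(3))
    print(tree.execute(frame=None))   # 5; the add node is now an IntAdd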
What's the distinction between Integer Programming and Constraint Programming? From a cursory glance they appear to be the same solution to the same problem...
Integer programming is effectively searching the edge of a convex polygon (a polytope in higher dimensions) for the optimal value of an objective function defined in terms of the coordinates of points in the space. The inequalities are planes that subdivide the solution space into permitted and non-permitted regions.
Constraint programming generates lots of potential solutions (combinatorially) and prunes the search tree to make the large numbers tractable.
The intuitions behind the two techniques are quite different.
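A toy sketch of that generate-and-prune idea (my own example, not taken from any CP system): backtracking search that abandons a whole subtree the moment a partial assignment violates a constraint.

    def consistent(assignment, constraints):
        return all(c(assignment) for c in constraints)

    def solve(variables, domain, constraints, assignment=None):
        # Backtracking search: extend the partial assignment one
        # variable at a time; prune a whole subtree as soon as a
        # constraint is violated.
        assignment = assignment or {}
        if not consistent(assignment, constraints):
            return None                      # prune here
        if len(assignment) == len(variables):
            return dict(assignment)          # complete and consistent
        var = variables[len(assignment)]
        for value in domain:
            assignment[var] = value
            result = solve(variables, domain, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]
        return None

    # Constraints return True while they cannot yet be violated.
    def all_different(a):
        return len(set(a.values())) == len(a)

    def sum_is_10(a):
        return len(a) < 3 or a["x"] + a["y"] + a["z"] == 10

    print(solve(["x", "y", "z"], range(10), [all_different, sum_is_10]))
    # {'x': 0, 'y': 1, 'z': 9}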
> Integer programming is effectively searching the edge of a convex polygon...
Great definition of linear programming. In a sense the thing that makes integer programming hard is that the feasible region is not convex -- for two feasible solutions x and y, ax + (1-a)y for 0<=a<=1 is not necessarily feasible.
I'd also say that integer programming is a kind of constraint-based programming extended with an objective function -- we're not just looking for any satisfactory answer, but for an answer whose cost or value cannot be improved upon. You're definitely right that "things called constraint-based programs" tend to be solved in different ways, though (and the languages they're expressed in tend to be nicer, too).
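A quick sanity check of that claim with toy numbers: (0, 1) and (1, 0) both satisfy x + y = 1 in integers, but their convex combination at a = 0.5 does not.

    x, y = (0, 1), (1, 0)     # two integer-feasible points
    a = 0.5
    mid = tuple(a * xi + (1 - a) * yi for xi, yi in zip(x, y))
    print(mid)   # (0.5, 0.5): still sums to 1, but no longer integral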
I think your definition of integer programming is not correct. Integer programming, or mixed integer programming, requires that all or some of the variables in the solution be integers. Most integer programming models use binary variables to indicate decisions.
Further, I think what you described at the beginning of your post is linear programming. But linear programming searches the vertices of the polytope, not the edges.
The biggest issue with integer programming is the lack of convexity of the solution space compared to linear programming, so the approach to solving it is very different. If I've made any mistakes in my post, I welcome corrections.
Constraint programming can be viewed as looking for feasible solutions to optimization problems.
So, what is optimization? For positive integers m and n, the real numbers R, and functions

    f: R^n --> R
    g: R^n --> R^m

find x in R^n to (problem MP1)

    maximize f(x)
    subject to g(x) <= 0

where g(x) is regarded as an m x 1 matrix and the 0 is also m x 1 but with all its components 0.
The constraints are the g(x) <= 0; we say that we have m constraints. The subject is essentially the same using g(x) >= 0 or g(x) = 0 instead, or minimizing instead of maximizing.
Point x in R^n is feasible provided it satisfies the constraints, that is, g(x) <= 0. The feasible region is the set F of all feasible x. A point x is optimal if it is feasible and makes f(x) as large as possible; that is, x is optimal provided for any feasible y we have f(x) >= f(y).
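In code, those definitions look like this (a toy instance with n = 2, m = 3, made-up f and g, and a brute-force grid search purely to make "feasible" and "optimal" concrete):

    import itertools

    # Toy instance of MP1: maximize f(x) subject to g(x) <= 0, x in R^2.
    f = lambda x: x[0] + 2 * x[1]
    g = lambda x: (x[0] + x[1] - 1,   # x0 + x1 <= 1
                   -x[0],             # x0 >= 0
                   -x[1])             # x1 >= 0

    def feasible(x):
        return all(gi <= 0 for gi in g(x))

    grid = [i / 100 for i in range(-50, 151)]
    best = max((p for p in itertools.product(grid, grid) if feasible(p)),
               key=f)
    print(best, f(best))   # (0.0, 1.0) with value 2.0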
If one or more of the functions f or g is non-linear, then we have non-linear programming. Here we may use the Kuhn-Tucker conditions to seek a point that satisfies necessary conditions for optimality. For an algorithm, we may take gradients of the constraints to set up a linear program, solve that, do a line search, and try again.
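For instance, with scipy, whose SLSQP method is a sequential quadratic programming routine in the same local-model/solve/step spirit:

    import numpy as np
    from scipy.optimize import minimize

    # Maximize f(x) = x0 * x1 subject to x0^2 + x1^2 <= 2.  scipy
    # minimizes, so negate f; its inequality convention is c(x) >= 0.
    res = minimize(lambda x: -(x[0] * x[1]),
                   x0=np.array([0.5, 0.5]),
                   method="SLSQP",
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: 2 - x[0]**2 - x[1]**2}])
    print(res.x)   # close to (1, 1), a Kuhn-Tucker point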
If functions f and g are linear, easily written using matrix algebra, then the optimization problem is linear programming (here programming is used in the British sense of operational planning).
Function f is convex provided for any x, y in R^n and any t in [0, 1] we have

    t f(x) + (1 - t) f(y) >= f( t x + (1 - t) y )

That is, in a graph of f, a straight line between (x, f(x)) and (y, f(y)) is on or over the graph and never under it. So, the straight line over-approximates the function.
If f is convex, then it is continuous (the proof is not very difficult but is in Fleming, Functions of Several Variables). IIRC convex also implies differentiable almost everywhere (maybe proved in R. T. Rockafellar, Convex Analysis). The epigraph of f is the graph of f and everything above it; it is a convex set, so it has supporting hyperplanes and is the intersection of all the closed half spaces from its supporting (tangent) hyperplanes (IIRC, the proof is in Fleming). Such a supporting hyperplane gives a subgradient, which can be quite useful in optimization, e.g., Lagrangian relaxation. There is a closely related, really nice result in Hilbert space: every non-empty, closed convex subset has a unique element of minimum norm. That can be very nice to know!
To be more clear, a convex function, while continuous, need not be differentiable. Why not? Intuitively, at a point the function can make a sharp turn. Similarly, a cube is a convex set but has 12 sharp edges and 8 sharp corners; these have supporting (tangent) planes, but the planes are not unique. A subgradient at a point would be a gradient, with a tangent plane, if the supporting plane at that point were unique. So, intuitively, a subgradient is nearly a tangent plane.
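As a toy illustration, here is the subgradient method on the simplest kinked convex function, f(x) = |x|, with the classic diminishing step sizes:

    # Subgradient descent on f(x) = |x|, convex but not differentiable
    # at 0, where any s in [-1, 1] is a subgradient.
    def subgradient(x):
        if x > 0: return 1.0
        if x < 0: return -1.0
        return 0.0      # any value in [-1, 1] works at the kink

    x = 5.0
    for k in range(1, 200):
        x -= (1.0 / k) * subgradient(x)   # diminishing steps 1/k
    print(x)   # near 0, the minimizer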
Function f is concave provided that -f is convex. If f is convex and -f is convex, then f is linear; so, intuitively, a function that is convex (or concave) is half of being linear.
We have some astoundingly powerful methods for solving linear programming problems. A significant fraction of the Nobel prizes in economics have been from economic models based on linear programming.
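Those methods are a library call away these days; e.g., a toy LP via scipy's HiGHS-based linprog:

    from scipy.optimize import linprog

    # Maximize 3*x0 + 2*x1  s.t.  x0 + x1 <= 4,  x0 + 3*x1 <= 6,  x >= 0.
    # linprog minimizes, so negate the objective.
    res = linprog(c=[-3, -2], A_ub=[[1, 1], [1, 3]], b_ub=[4, 6],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # [4. 0.] 12.0, attained at a vertex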
Some special cases, e.g., least cost network flows and maximum matchings, have even better results for the algorithms.
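For example, a toy least-cost network flow, assuming the networkx package is available (it solves this with network simplex rather than general LP):

    import networkx as nx

    G = nx.DiGraph()
    G.add_node("s", demand=-4)   # supply 4 units
    G.add_node("t", demand=4)    # demand 4 units
    G.add_edge("s", "a", weight=3, capacity=3)
    G.add_edge("a", "t", weight=1, capacity=3)
    G.add_edge("s", "b", weight=5, capacity=3)
    G.add_edge("b", "t", weight=1, capacity=3)

    flow = nx.min_cost_flow(G)
    print(flow)                      # 3 units via a, 1 via b
    print(nx.cost_of_flow(G, flow))  # 3*4 + 1*6 = 18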
If in MP1 function f is concave and function g is linear, then again we have some good methods.
If in MP1 function f is quadratic and concave and function g is linear, then we have the problem of Markowitz in investment portfolio management and his Nobel prize in economics.
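A toy Markowitz instance (all numbers made up), again with scipy's SLSQP since the objective is a concave quadratic and the constraints are linear:

    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.08, 0.12, 0.10])        # expected returns
    Sigma = np.array([[0.10, 0.02, 0.04],    # covariance of returns
                      [0.02, 0.08, 0.02],
                      [0.04, 0.02, 0.09]])
    risk_aversion = 3.0

    # Maximize mu.w - lambda * w.Sigma.w over the linear constraints
    # "fully invested" (weights sum to 1) and "no short selling".
    res = minimize(lambda w: -(mu @ w - risk_aversion * w @ Sigma @ w),
                   x0=np.ones(3) / 3, method="SLSQP",
                   bounds=[(0, 1)] * 3,
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w.sum() - 1}])
    print(res.x.round(3))   # optimal portfolio weights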
If we are maximizing a convex function subject to linear constraints, then we have, IIRC, an NP-hard problem.
If in linear programming we additionally require that some or all of the components of x be integers, then we have integer constraints and a problem in integer linear programming (ILP). ILP was one of the earliest problems shown to be NP-complete.
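A tiny ILP via scipy's milp, showing the gap between the LP relaxation and the integer optimum:

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Maximize x0 + x1 s.t. 2*x0 + 2*x1 <= 3, x integer and >= 0.
    # The LP relaxation allows x0 + x1 = 1.5; the integer optimum is 1.
    res = milp(c=np.array([-1.0, -1.0]),     # milp minimizes, so negate
               constraints=LinearConstraint(np.array([[2.0, 2.0]]), ub=3.0),
               integrality=np.ones(2),
               bounds=Bounds(0, np.inf))
    print(res.x, -res.fun)   # e.g. [1. 0.] 1.0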
So, that's a nutshell version of optimization. We can also extend it to accommodate randomness and to do much the same over time.
The over-time part is control theory, e.g., how to fly an airplane from one point to another in least time; there are books, e.g., by Athans and Falb. Mike Athans was long at MIT and Falb at the Brown Division of Applied Mathematics. Control theory had a flurry of interest during the space program. There is an extension, differential game theory, which is about, say, how a missile can best chase an airplane while the airplane tries its best to avoid the missile. Typically in control theory and differential game theory we look for necessary conditions for optimality, which usually turn out to be in terms of Lagrange multipliers.
Optimization over time with randomness is essentially stochastic optimal control, and the main technique is a version of dynamic programming. An early proponent was Richard Bellman.
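A miniature example of Bellman's idea with randomness (my own toy numbers): each day an offer is drawn uniformly from 1..10 and we either accept it or wait; the value function comes out of a one-line recursion.

    # Each day an offer is drawn uniformly from 1..10; accept it or
    # wait (10 days total).  V[t] = expected payoff with t days left.
    offers = range(1, 11)
    V = [0.0] * 11
    for t in range(1, 11):
        # Bellman recursion: take the offer if it beats the expected
        # value of continuing, else continue.
        V[t] = sum(max(o, V[t - 1]) for o in offers) / 10
    print(V[10])   # about 9.09: with days in hand, hold out for more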
The Black-Scholes Nobel prize work in option pricing is a relatively simple case of stochastic optimal control.
So, in terms of MP1, constraint programming is, say,

    find x in R^n so that g(x) <= 0

that is, just find a point in the feasible region F.
An example: yesterday we killed 5000 hogs and hung and chilled the results. Today we cut the hogs and put the pieces in boxes in 18-wheel trucks, about 40,000 pounds of load per truck, for delivery to customers about 900 miles away. We have only three bays in the loading dock, so we want to know the order in which to load the trucks to get them all loaded in time. So we have a constraint satisfaction problem and not really an optimization problem.
Fundamentally, i.e., in the sense of computational complexity, finding a feasible point is as difficult as finding an optimal point. So, really, constraint programming is fundamentally no easier than optimization.
IIRC at one time SAP and ILOG were interested in CPLEX as a means of solving constraint satisfaction problems. So, to solve a constraint satisfaction problem, start with some optimization software.
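Concretely, with toy numbers: hand the optimizer the constraints and a zero objective, and any "optimal" point it returns is simply a feasible one (scipy's milp here as the optimization software):

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Pure feasibility: a zero objective makes any feasible point
    # "optimal".  Constraints: x0 + x1 + x2 = 6, x1 >= x0 + 1,
    # all integers in 0..4.
    A = np.array([[1, 1, 1],
                  [-1, 1, 0]])
    res = milp(c=np.zeros(3),             # nothing to optimize
               constraints=LinearConstraint(A, lb=[6, 1], ub=[6, np.inf]),
               integrality=np.ones(3),
               bounds=Bounds(0, 4))
    print(res.x)   # some feasible point, e.g. [1. 2. 3.]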
If it loads quickly, why not? My connection has a sub-3ms ping to the nearby datacenters; what exactly is the bottleneck?
It's not to say there isn't a place for SPAs, but I would argue many of these would be better suited to being full-fledged desktop apps instead (better performance and native integration).
Okay, for you there's no bottleneck, just woeful UI/UX. Seriously, seeing just one stateful element at a time, then pressing "Next". Wow, just wow.
I fully expect that when I type the first few digits of my credit card number, the site knows precisely which card provider I'm using, updates the UI accordingly (highlighting the card type), and immediately updates the security code field to request my 3-digit MasterCard CVC vs. a 4-digit AMEX CID.
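The detection logic itself is trivial; a simplified sketch (real BIN tables are far longer, and on an actual checkout page this would be JavaScript rather than Python):

    # Simplified issuer detection from the leading digits (the ranges
    # here are the common textbook ones).
    def card_brand(digits: str):
        if digits.startswith("4"):
            return "Visa", 3                # 3-digit CVV
        if digits[:2] in {"34", "37"}:
            return "American Express", 4    # 4-digit CID
        if digits[:2].isdigit() and 51 <= int(digits[:2]) <= 55:
            return "MasterCard", 3          # 3-digit CVC
        return None, None

    print(card_brand("37"))   # ('American Express', 4) after two keystrokes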
To be clear, I'm not suggesting code needs to run on a client. One day, when we all have fast enough connections (with minimal latency), including on mobile networks, perhaps all the code can run server-side and we'll just be streamed the output (video), which, again, updates in real time!
However, living in the real world as it stands today, client-side scripting is a fantastic solution to real-world problems, i.e. latency and bandwidth constraints, and real-time responsive UIs.
Docstrings are for modules, classes, functions and methods, not variables mixed with the code.
And "optional" is a misnomer unless you only ever touch your own code. Programming is generally a social activity, so for most of us, "optional" means just means "inconsistently required".
I don't have a philosophical objection to typing, but I'm yet to see a syntax that won't muddy Python's.
> And "optional" is a misnomer unless you only ever touch your own code. Programming is generally a social activity, so for most of us, "optional" means just means "inconsistently required".
Optional means that it's not required or mandatory.
You don't need to use it if you don't want to. If you're working on a project and the project follows a style guide then you follow the style guide, but nothing forces your bosses to require it. The code will run anyway.
It isn't good or bad. It's an optional feature. No one is forced to use it unless they specifically want it.
Your bosses aren't forced to use it either. If they don't want type annotations and don't believe the official style guide should impose them, then you don't have to use them.
If your bosses decide to impose them and change the official style guide to require type annotations, then your personal taste doesn't have any say in the subject either way; you just need to act like a professional and be a team player.
But the key issue is that no one is forced to use optional features unless they really want to.
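"The code will run anyway" is literal: CPython stores annotations but never checks them. For example:

    def greet(name: str) -> str:   # annotated
        return "hello " + str(name)

    def shout(name):               # unannotated, equally valid
        return str(name).upper()

    print(greet(42))               # runs fine: nothing checks the types
    print(shout("hi"))
    print(greet.__annotations__)   # {'name': <class 'str'>, 'return': <class 'str'>}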
It does seem odd that a module could be partially static. If it was optional I'd prefer to see it enabled by a different filetype (e.g. pys, heh piss).