yoloClin's comments

Broken access control covers things like insecure direct object reference vulnerabilities and authorisation bypasses, _as well as_ broken authentication controls.

I'm not saying you're wrong, and agree that security should never be a 'premium' product, but it's important to identify that it isn't _just_ limited to authentication.

That being said, messing with SAML/OAuth assertions is generally pretty fruitful when pentesting, and MFA is something I'd recommend for almost all public-facing applications.


Friendly reminder that this is about frequency, as opposed to severity. 'Cryptographic Failures' covers everything from theoretical vulnerabilities which require millions of dollars to exploit, through to systems with no encryption whatsoever. Granted, both should be fixed, but the latter is of far more real-world consequence under most threat models.

My personal and current recommendation for developers is to focus on sane authorisation models - I commonly see direct-object type vulnerabilities related to cross-user/organisational access where the user has the correct role/privilege level to access a resource, but has no association with the record owner. An example of this would be a multi-tenant web-store where an admin for the EvilCorp entity can modify products belonging to InnocentPtyLtd.
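A minimal sketch of the missing check, with hypothetical `User`/`Product` models (names invented for illustration): a role check alone isn't enough, the record must also belong to the caller's tenant:

```python
from dataclasses import dataclass

@dataclass
class User:
    role: str
    org_id: int

@dataclass
class Product:
    org_id: int

def can_modify(user: User, product: Product) -> bool:
    # Role alone is insufficient: the record must belong to the user's tenant.
    return user.role == "admin" and user.org_id == product.org_id

evil_admin = User(role="admin", org_id=1)
innocent_product = Product(org_id=2)
assert not can_modify(evil_admin, innocent_product)
```

The vulnerable version of `can_modify` is the same function minus the `org_id` comparison, which is exactly the cross-tenant bug described above.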

I also suspect poorly configured CORS policies might be in the top 10 in a few years' time, due to situations where SPAs (which will inevitably use JWTs) and traditional cookie-based apps are hosted using similar configs, resulting in the latter being vulnerable to CSRF-type attacks.


It's not even credibly about frequency. There is no meaningful survey done across the industry of vulnerability occurrences; they just invite and solicit contributions of vulnerability data from arbitrary sources.


This. I want a shell that will contextually spit out plaintext in interactive mode, then a JSON object when scripted or piped.

Pretty sure this is what PowerShell does, but the UI just feels so damn unnatural.
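The behaviour described above boils down to a TTY check on stdout. A minimal sketch in Python (field names are illustrative):

```python
import json
import sys

def report(rows):
    """Print a human-readable table on a terminal, JSON when piped."""
    if sys.stdout.isatty():
        for row in rows:
            print(f"{row['user']:<12} {row['pid']:>6}")
    else:
        json.dump(rows, sys.stdout)

report([{"user": "root", "pid": 1}, {"user": "lmorg", "pid": 349}])
```

Run interactively you get aligned columns; redirect or pipe the output and the same call emits machine-parseable JSON.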


I'd like most programs to implement a JSON interface, given the right flags. Or maybe the existence of some env var like `DEFAULT_OUTPUT=JSON`.

I like this approach:

https://github.com/kellyjonbrazil/jc

> CLI tool and python library that converts the output of popular command-line tools and file-types to JSON or Dictionaries. This allows piping of output to tools like jq and simplifying automation scripts
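jc's core idea can be sketched in a few lines. This hypothetical parser only handles the simple case of a whitespace-aligned header row (the real jc ships dedicated per-command parsers with typed fields):

```python
import json

def parse_columns(text):
    """Parse whitespace-aligned tool output (header row + data rows)
    into a list of dicts, roughly what jc does conceptually."""
    lines = text.strip().splitlines()
    headers = lines[0].split()
    # Limit splits so the last column may contain spaces.
    return [dict(zip(headers, line.split(None, len(headers) - 1)))
            for line in lines[1:]]

sample = """USER  PID COMMAND
root    1 /sbin/init
lmorg 349 -zsh"""

print(json.dumps(parse_columns(sample)))
```

The JSON output can then be piped straight into jq, which is the whole point of the tool.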


PowerShell does not use JSON. It's based on .NET objects.


Perhaps IDEs and shells could converge?


Murex does this. eg

Take a plain text table, convert it into an SQL table and run an SQL query:

  » ps aux | select USER, count(*) GROUP BY USER
  USER                   count(*)
  _installcoordinationd  1
  _locationd             4
  _mdnsresponder         1
  _netbios               1
  _networkd              1
  _nsurlsessiond         2
  _reportmemoryexception 1
  _softwareupdate        3
  _spotlight             5
  _timed                 1
  _usbmuxd               1
  _windowserver          2
  lmorg                  349
  root                   134
The builtins usually print human-readable output when STDOUT is a TTY, or JSON (or jsonlines) when STDOUT is a pipe.

  » fid-list:
    FID   Parent    Scope  State         Run Mode  BG   Out Pipe    Err Pipe    Command     Parameters
    590        0        0  Executing     Normal    no   out         err         fid-list    (subject to change)

  » fid-list: | cat
  ["FID","Parent","Scope","State","RunMode","BG","OutPipe","ErrPipe","Command","Parameters"]
  [615,0,0,"Executing","Normal",false,"out","err",{},"(subject to change) "]
  [616,0,0,"Executing","Normal",false,"out","err",{},"cat"]
and you can reformat to other data types, eg

  » fid-list: | format csv
  FID,Parent,Scope,State,RunMode,BG,OutPipe,ErrPipe,Command,Parameters
  703,0,0,Executing,Normal,false,out,err,map[],(subject to change)
  704,0,0,Executing,Normal,false,out,err,map[],csv
and query data within those data structures using tools that are aware of that structural format. Eg GitHub's API returns a JSON object and we can filter through it to return just the issue IDs and titles:

  » open https://api.github.com/repos/lmorg/murex/issues | foreach issue { printf "%2s: %s\n" $issue[number] $issue[title] }
  348: Potential regression bug in `fg`
  347: Version 2.2 Release
  342: Install on Fedora 34 fails (issue with `go get` + `bzr`)
  340: `append` and `prepend` should `ReadArrayWithType`
  318: Establish a testing framework that can work against the compiled executable, sending keystrokes to it
  316: struct elements should alter data type to a primitive
  311: No autocompletions for `openagent` currently exist
  310: Supprt for code blocks in not: `! { code }`
  308: `tabulate` leaks a zero length string entry when used against `rsync --help`

Source: https://github.com/lmorg/murex

Website: https://murex.rocks


As much as this is an improvement in many ways, using JSON for this feels like it doubles down on part of the problem with the current standard. Every tool is rendering JSON only for the next tool in the pipeline to parse it back out.


Yeah, I do understand where you're coming from and I've spent a lot of time considering how I'd re-architect murex to pass raw data across (like Powershell) rather than marshalling and unmarshalling data each side of each pipe.

In the end I settled on the design I have because it retains compatibility with the old world while enabling the features of the new, and it behaves in a predictable way, so it's (hopefully) easy for people to reason about. PowerShell (and other languages with a REPL, like Python, LISP, etc) still exists for those who want something that's ostensibly a programming environment first and a command line second, and I don't think trying to compete with the excellent work there would be sensible given how mature those solutions already are. But for a lot of people, the majority of their command line usage is just chaining existing commands together and parsing text files. Often they want something terser than $LANG, as a lot of command lines are read-once write-many, and thus they are happy to sacrifice a little in language features for the sake of command line productivity. This is the approach murex takes, albeit while also trying to retain readability despite being succinct (which is probably the biggest failing of POSIX shells in the modern era).

What I've built is definitely not going to be everyone's preferred solution, that's for sure. But it works for me and it's open source, so hopefully others find it as useful as I do :)


A good binary format would be best, but JSON is already a step up - at least it's obvious where each value starts and ends. That said, maybe it wouldn't be too hard to offer a binary-serialized JSON format as well (I think BSON is the currently widespread standard)?

On a related note, I wonder if and how a pipe could handle "format negotiation" between processes? I.e. is there a way for a CLI app to indicate it can consume and produce structured binary data? Then the piping layer could let compatible apps talk through an efficient protocol, and for anyone else, it would automatically drop to equivalent JSON (and then maybe binarize it back up, if the next thing in the pipeline can handle it).
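Purely as a thought experiment, such negotiation could be approximated with an environment variable the consumer exports. `PIPE_ACCEPT` below is invented for this sketch, not a real convention:

```python
import json
import os
import sys

def emit(records):
    """Emit records in the best format the consumer advertises.

    PIPE_ACCEPT is a hypothetical env var a downstream process could
    export to advertise the formats it understands, in preference order.
    """
    accepted = os.environ.get("PIPE_ACCEPT", "text").split(",")
    if "jsonlines" in accepted:
        for r in records:
            sys.stdout.write(json.dumps(r) + "\n")
    elif "json" in accepted:
        sys.stdout.write(json.dumps(records))
    else:  # plain-text fallback for legacy consumers
        for r in records:
            sys.stdout.write("\t".join(str(v) for v in r.values()) + "\n")

emit([{"user": "root", "pid": 1}])
```

A real negotiation would need to be bidirectional (the producer can't see the consumer's environment across a pipe), which is why something like a shell-mediated protocol, as murex does below, is more practical.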


> A good binary format would be best, but JSON is already a step up - at least it's obvious where each value starts and ends. That said, maybe it wouldn't be too hard to offer a binary-serialized JSON format as well (I think BSON is the currently widespread standard)?

You can already use BSON. The data is piped in whatever serialisation format it's typed as but the type information is also sent. Builtins then use generic APIs that wrap around STDIN et al which are aware of the underlying serialisation.

So the following works the same regardless of whether example.data is a JSON file, BSON, YAML, TOML or whatever else:

  open example.data | foreach i { out "$i[name] lives at $i[address]" }
The issue is when you want to convert tabulated data like a CSV into JSON (or similar) since you're not just mapping the same structure to a different document syntax (like with JSON, YAML, BSON, etc), you're restructuring data to an entirely new schema. I haven't yet found a reliable way to solve that problem.
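The schema ambiguity is easy to see with Python's stdlib: both of the JSON shapes below are faithful renderings of the same CSV, and neither is canonically "correct":

```python
import csv
import io
import json

raw = "user,pid\nroot,1\nlmorg,349\n"

# One obvious schema: a list of row objects...
rows = list(csv.DictReader(io.StringIO(raw)))
print(json.dumps(rows))
# [{"user": "root", "pid": "1"}, {"user": "lmorg", "pid": "349"}]

# ...but a column-oriented schema is equally valid, which is why the
# CSV -> JSON conversion has no single right answer.
cols = {h: [r[h] for r in rows] for h in rows[0]}
print(json.dumps(cols))
# {"user": ["root", "lmorg"], "pid": ["1", "349"]}
```

And that's before deciding whether `"1"` should become the number `1`, which CSV leaves entirely to the reader.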

> On a related note, I wonder if and how a pipe could handle "format negotiation" between processes? I.e. is there a way for a CLI app to indicate it can consume and produce structured binary data? Then the piping layer could let compatible apps talk through an efficient protocol, and for anyone else, it would automatically drop to equivalent JSON (and then maybe binarize it back up, if the next thing in the pipeline can handle it).

That isn't too far removed from how murex already works. Supported tools can use common APIs to convert STDIN into in-memory structures, and similarly convert them back to their serialisation formats. So if you have a tool like `cat` in your pipeline, it can use the pipe as a standard POSIX byte stream, but murex-aware software can treat the pipeline as structured data. The drawback is that if you're reading from a POSIX pipe into a murex command, you might need to add casting information (see below). But the benefit is you're not throwing away 40 years of CLI development:

  # Using a POSIX tool to read the data file:
  # casting is needed so `foreach` knows to iterate through a JSON object

  cat example.json | cast json | foreach { ... }



  # Using a murex tool to read the data file:
  # no casting is needed because `open` passes that type information down the pipe

  open example.json | foreach { ... }
(`open` here isn't doing anything clever; it just "detects" the JSON file based on the file extension -- or the Content-Type header if the file is an HTTP URI)


Outputting raw structs would also have its own issues. What would be reasonable? Protobufs?


Protobuf requires both ends of the comms to agree on the same schema. You'd need something that transmits key names, like JSON, YAML, TOML etc. If you wanted a binary format then you could send BSON (binary JSON), and murex does already support this. But pragmatically, a standard command line (or even your average shell script) isn't going to be consuming data at volumes where the difference between JSON and BSON would impact the bandwidth of a pipe.

Worst case scenario, you're dealing with gigabytes or more of data, in which case you'd want a streamable format like jsonlines, where each command in a pipeline can run concurrently without waiting for the file to EOF before processing it. That's a situation most binary serialisations aren't well optimised for.
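The streaming property is the point: a jsonlines consumer can act on each record as it arrives, rather than buffering the whole stream. A minimal Python sketch:

```python
import json

def stream(lines):
    """Yield one record per jsonlines row as it arrives; no need to
    buffer the whole input before downstream work starts."""
    for line in lines:
        if line.strip():
            yield json.loads(line)

source = iter(['{"n": 1}\n', '{"n": 2}\n'])
first = next(stream(source))  # usable before the stream is exhausted
assert first == {"n": 1}
```

Because each record is a self-contained JSON document separated by newlines, every stage of a pipeline can process record N while record N+1 is still being produced upstream.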


this looks like fun! thanks


I really like Reveal.js.

I feel the design constraints enforced (size / amount of text per slide) are awesome, but at the same time very limiting for technical presentations where large amounts of text are pretty important.

Does anyone else have this issue / work around it? Maybe I suck at designing slide decks, but I just feel that the ability to break design rules easily is sometimes a must-have.


On the flipside, it's more plausible for an actor to get malicious code into a project in order to infect a target. Sure, it has to be obscure enough to pass any code reviews during PR and/or involves compromising a contributor, but it is possible and something I see happening in the next 10 years.

I'm also genuinely curious how many people actively review all the code they actually run. I doubt anybody but the very largest tech companies and high-end government agencies could actually afford to resource such a feat, and even then they would have DMZ-type areas to detonate unaudited software.


I agree with CSP, but as I've commented on another thread, I recommend CSP _with_ other mitigating factors, due to DOM/HTML injection and patchy browser support.


I think the best solution is CSP _and_ injection mitigations - even without XSS there is still DOM injection which can be equally damaging reputationally.

iirc, IE11 (under Windows 7, specifically) does not support CSP. I don't think CSP mitigates all XSS vectors either (`<a href="javascript:alert(1)">` for example). Sure, IE11 is deprecated, but that doesn't mean you don't need to account for it when building an enterprise application.

I'm curious if you can provide any details on what / how Safari was exploitable with CSP - https://caniuse.com/?search=content-security-policy indicates that it should be pretty uniform across popular browsers. If you'd prefer a private channel @yoloClin on twitter.


A CSP without unsafe-inline will block your example as well.

I agree that one should still sanitize input, at least for fields which allow HTML*, but it's obvious XSS filtering/sanitization can introduce XSS as much as prevent it. This article is merely one example; there were enough to make Chrome give up and turn off their XSS filter. So the main defence should be CSP, and sanitization is just a nice-to-have.

* Because sanitized input is often saner than the nonsense users can insert when they are allowed to put in tags. Basically use sanitization as an HTML Tidy with extra filtering. Also for very old browsers.
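The escaping/sanitization distinction in a nutshell, using Python's stdlib escaper purely for illustration: escaping neutralises markup wholesale, whereas DOMPurify-style sanitizers try to keep "safe" tags, which is exactly where filter-bypass bugs creep in:

```python
import html

# A classic payload that survives naive tag filters.
payload = '<a href="javascript:alert(1)">click</a>'

# Escaping renders it inert text rather than markup.
escaped = html.escape(payload)
print(escaped)
# &lt;a href=&quot;javascript:alert(1)&quot;&gt;click&lt;/a&gt;
```

A sanitizer, by contrast, would have to decide that `<a>` is safe but `javascript:` URIs are not, and every such decision is a potential bypass.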


Sorry, I just re-read my initial comment and I think the way I wrote it was misleading.

My case was about the allow-scripts directive of the sandboxed iframe, which I thought was linked to the csp mechanism, but now that I checked the documentation, it seems that I was wrong.

I basically display a random HTML document in a sandboxed iframe with scripts disabled. When you do so on Chrome and Firefox, event listeners injected inside the iframe from a script in the parent frame still work, but on Safari they do not, because all the scripts (or events) inside the iframe are disabled.

So rather than relying on this mechanism, I used DOMPurify to filter out all the scripts.


Ping me on twitter. @yoloClin


Hey I'm locked out of my twitter but please reach out at Thelittleone@altmails.com


Thanks will do.


Is anyone able to provide a map of modern frameworks? With React-native, React-redux, Angular, Vue and probably a bunch of others.

I'm really not sure what's relevant now and what was relevant 6 months ago. I'm genuinely curious, but it's pretty difficult to grasp and no framework homepage is going to tell you "Don't use me, I'm about to be a dead project!" and every developer will tell you their preferred framework is the best framework.


"I can't figure out what will be relevant in six months" might have been a legitimate complaint a decade ago, but those days are long gone.

React has been a core part of the front end ecosystem for almost 10 years. Angular is more than 10 years old. Vue is the "new kid" on the block at about six years old.

React is roughly 10x more popular than Vue or Angular according to npm usage, has been far more popular in usage terms for many years, and continues to grow faster than either Vue or Angular.

Complaining you can't figure out if React or Angular or Vue will be relevant in six months is a bit like complaining you can't figure out if C++ or Java will be relevant in six months. Yes, there is lots of advancement happening in the front end space. Yes, that's a good thing. No, it's not an excuse to act like you're paralyzed to understand the current state.


React, almost 10 years? It was only open sourced 7 years ago and definitely took time to fully catch on..


I'm truly amazed to still encounter "omg React is so new and unproven how can we possibly keep up with stuff like this?" posts. Next up: is no-sql just another fad or will it catch on?


Who said that in this thread? (Nobody, they just pointed out you are wrong about the timeframe)


Oh, c'mon, that's a bit of a stretch.

I was around in 2014, when angular 1 was starting to be a big thing.

If you look at reactjs on google trends, react gets its first bump in 2014-15, and then another one around 2017.

VueJs gets a huge jump in 2016-2017.

---

Java has been around for decades, has had much fewer changes, and is entrenched in a lot of code. In comparison, react has gone through some big changes in a very short amount of time, and most people use it to build SPAs that aren't usually mission critical.


Everyone can set their threshold for when they noticed React at whatever point they choose. Regardless of when any particular individual noticed React, it has undeniably been a big deal for years and can be reasonably expected to remain relevant for years to come. The assertion that "omg front end changes so much how can I know if React will matter in six months" is simply nonsense.


React Native and React are two different things, no? JavaScript is not Java, Programming Pearls by Jon Bentley is not about Perl, C# is not C, etc.


React, Angular, and Vue are the big 3 web frameworks. Of which React is still more popular than the others I believe, and it takes a slightly different approach. While Angular and Vue use string-templating DSLs to control rendering, React uses JSX which is a thin syntax sugar over JavaScript (and allows you to mix JavaScript in with it). React is also more functional. In practice I find this makes it more flexible, and easier to debug. Angular and Vue are very similar, but Vue has the reputation of being simpler while Angular is the Enterprise Java of the web view.

There are a bunch of others that are less popular such as Ember, Svelte, and Inferno which are also decent in their own right, but the ecosystem of libraries around them is smaller.

The most important libraries to know about in the React ecosystem are React-Router for routing, and either Redux or MobX for state management. There are other options, and for small apps you could get away without a state management library, but these are the mainstream options. I'm not super-up on the Angular/Vue ecosystems, but I believe they're more integrated (e.g. they provide more first-party libraries).

Almost everyone who is using these frameworks is also using either Babel or TypeScript together with Webpack for bundling. There are other options for bundling such as Parcel and Rollup, and again it is possible to get away without bundling or transpiling if you really want to, but Webpack is still the mainstream option for applications (Rollup is well-suited to libraries).

React-native is a different beast. It's a cross-platform mobile (and now desktop) framework rather than a web-based one. It uses React executed in a JavaScript VM to control rendering, but it renders native mobile UI toolkits. There are similar projects for Vue and Angular, but unlike the web versions, which are competitive with React, they are nowhere near as mature. React-native's real competition for cross-platform mobile development is Flutter. Flutter is written using the Dart language and takes a different approach to React-Native: rather than compiling down to native UI widgets, it custom-renders everything. This makes it more reliable and consistent across platforms, but also more limited in what you can do, because you can't hook into the existing ecosystem of native iOS/Android libraries nearly so easily.

My subjective view on this is that React-Native is just about on the cusp of reaching maturity, while Flutter doesn't quite seem to have enough momentum to reach the mainstream. Although I'd love to be proven wrong on that one.


I don't think it's too common, but Vue does support JSX: https://vuejs.org/v2/guide/render-function.html


I don't know how often a project dies because it has planned obsolescence vs. people stop using and updating it, but for the latter case I think https://2019.stateofjs.com/ is pretty great.


Thanks, that's exactly what I want!


I can provide you with a short video clip of spaghetti falling through the air, it's functionally equivalent


HTML Code Golf - How to make really small _extremely fragile_ HTML that doesn't break Firefox or Chrome, currently at least

FTFY

