
Still annoyed that PowerShell didn't follow POSIX standard for arguments, at a time when MS was working hard on open-source compatibility.


It’s unfortunate that Go’s standard package `flag` doesn’t follow the standard either, given the language is otherwise a good fit for command-line tools.


By standard, do you mean -s for short flag, and --two_dashes_for_long_flags?

Because if you don't care about chaining together short flags and just want to use two dashes for your long flags, Go will happily accept that.

https://golang.org/pkg/flag/ : "One or two minus signs may be used; they are equivalent."
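By contrast, chaining short flags together is exactly what POSIX `getopts` handles and Go's `flag` doesn't. A minimal sh sketch (the flag letters `a`/`b` are made up for illustration):

```shell
#!/bin/sh
# POSIX getopts: chained short flags (-ab) parse the same as separate ones (-a -b).
parse() {
  OPTIND=1            # reset between calls; getopts keeps its state in OPTIND
  a_flag=0 b_flag=0
  while getopts ab opt "$@"; do
    case $opt in
      a) a_flag=1 ;;
      b) b_flag=1 ;;
    esac
  done
  echo "a=$a_flag b=$b_flag"
}

parse -ab     # chained:  prints a=1 b=1
parse -a -b   # separate: prints a=1 b=1
```

Go's `flag` would instead treat `-ab` as a single flag named "ab".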


But the generated help prints a single dash for long flags, which contributes to the decline of double-dash long flags.


Oh, I agree.

I ran into a related issue a couple of years back where people were using single-dash flags for a C++ project that was using Abseil flags in conjunction with getopt parsing of short flags (for legacy reasons). Why were they using single-dash flags, despite that not showing up anywhere in our documentation? They copy-pasted from --help.

(I'm happy to say that --help in Abseil has since been fixed.)


But that doesn’t preclude mistakes by collision (N short flags match a long one) or unpredictable bugs in a long flag interpreter (a short flag being a substring of a long one)—both being trivially common bugs when this ambiguity is allowed, especially when an API is ported to another environment with less tooling standardization around interpreting the input.


Go doesn't allow for specifying multiple short flags all run together, or for flag args without spaces, so neither of those are directly relevant here.

Also, that first issue happens with POSIX flags (with the GNU long flag extension, anyhow): `grep -help` is different from `grep --help` (and if you type the former, it'll just wait patiently for you to close stdin).
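A quick way to see this ambiguity (assuming GNU-style short-flag bundling in grep): `-help` is parsed as `-h -e lp`, i.e. "suppress filenames, pattern lp", so with stdin piped in it simply greps for "lp" instead of printing help:

```shell
# -help bundles into -h (don't print filenames) and -e lp (pattern "lp"),
# so this matches the literal line "lp" rather than showing usage text.
printf 'lp\nfoo\n' | grep -help
```

With nothing on stdin, the same command sits there waiting for input, as described above.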


Because of Go, I have to monitor what language a command-line program is written in before using it.


>at a time when MS was working hard on open-source compatibility.

You mean around 2002-2006? I find that pretty hard to believe.


I feel like the open-source compatibility paradigm really started right after PowerShell


It was probably even partly because of the reception to PowerShell.


/Options are the norm for Windows (and uh cough VMS).


Which is also why Windows uses backslash (\) as their path separator. Because forward slash would have collided with the slash option marker Windows inherited from VMS.


That is surprisingly false. Microsoft operating systems use both / and \ as a path separator, going all the way back to DOS.

Early versions of MS-DOS made it a user preference in the COMMAND.COM interpreter: whether to use / for options and \ for path separation, or vice versa.


IIRC the "forward slash" convention for Windows command options traces back to DOS, not VMS. Where DOS inherited the convention from I do not know.


CP/M https://en.wikipedia.org/wiki/CP/M

In longer words: Windows was originally a GUI system on top of DOS, which was influenced by CP/M. The NT kernel did away with DOS, but the influence lives on to this day. A simple example: not being able to name a file "con" (or any case variation of it) comes all the way from CP/M.

For the uninitiated: OSes from that era didn't have "directories"; everything lived in the root of the drive, including device files. So, to print a file, you could literally do something like:

    A> type FILE.TXT > PRN
When DOS added directories, they retained this "feature" so programs unaware of what directories were could still print by writing to the `PRN` "file". Because of "backwards compatibility", NT still has this "feature" as well.



One thing VMS got right is that each binary declared its supported options and the shell could tell you what they were. And it would take any unique abbreviation.


Powershell scripts and cmdlets work similarly. They probably won't have help text but at least you can see what's available without having to look at the argument parsing section of the script. And you can use the shortest unique prefix as the short form of an argument (though I don't love this since adding an argument can break the shortened form of other arguments)


It’s easy (although verbose) to add help text, and valid options, too.


Also easy to create option sets so that mutually exclusive arguments are shown in the help as different ways to invoke the script.


And bunch of other niceties, all queryable without running the script, and all feeding autocomplete with useful information:

https://news.ycombinator.com/item?id=26748549


...and TOPS-10 and TOPS-20 and RT-11 and RSX-11 and RSTS-11.


They were for DOS, too. Not that I’m disputing the VMS roots of WNT.


What would be a good reason to have POSIX standards in PowerShell, aside from, that's what POSIX does?


It'd make the typing simpler. PowerShell has POSIX-like aliases, like 'rm' and 'cd', but they don't accept POSIX parameters. So you end up with "rm -Recurse", since rm is an alias for Remove-Item.


I like PS in theory but the syntax and naming just absolutely kill me. What were they smoking when they named as simple an operation as delete "Remove-Item"? And what's with all of the capital letters?

That's what happens I guess when the people designing it haven't actually used a CLI day to day much, because, well, they're using Windows.


I can't agree. I have used Linux shells for some time (since '97), and while in the olden days I would laugh at VBS and all that awfulness, I'd take PowerShell any day.

The short, terse commands and the really awkward, confusing, mistake-prone syntax of sh or bash really rear their ugly heads in scripts.

Interactive shell? No problem. But that's the beauty of PowerShell: verbosity and correctness in scripts, where the IDE quickly expands those long commands, and short aliases for interactive use.


> The short, terse commands and the really awkward, confusing, mistake-prone syntax

When used in an interactive shell, short commands save time and effort. And it is easy to learn and remember them, because in everyday work you need only about 10 commands. For some commands I use a lot, I have one- or two-letter aliases to type even less, e.g. i=fgrep.

It makes shell scripts less readable for someone who comes from Windows and doesn't know even the common shell commands, but for someone who uses a shell at least from time to time they should be easy to read.


Yeah I agree with that. Bash (and friends) scripts are awful. PS scripts are nice and readable, and not subject to the insane quirks of bash ([ vs [[ vs test? come on)

Seems like the real solution is separating scripts from interactive use.


Ironically, it already happened: bash is the user interface, while /bin/sh is something else. But bash remains a REPL that was accidentally promoted to a user interface.


> What were they smoking when they named as simple an operation as delete "Remove-Item"?

Simple. All these commands work with providers, of which the file system is just one. Other providers include the Windows Registry, environment variables, certificate stores, and functions and variables in the PowerShell runtime. More providers can also be created and plugged into the system. PowerShell providers are essentially Windows' FUSE. See [0] for details.

So, for instance, you can do `Get-ChildItem HKCU:` to list entries under HKEY_CURRENT_USER in the Registry, the same way `Get-ChildItem C:/` will list the top-level items on the C: drive. Worth observing: while the console output for these two commands is similar, the results are in fact different objects underneath (Microsoft.Win32.RegistryKey vs. System.IO.FileInfo).

In short, these commands are an abstraction over file-system-like things. Whether or not that was a good idea is a different question.

--

[0] - https://docs.microsoft.com/en-us/powershell/module/microsoft...


It makes a little more sense in context to me. The verbose Verb-Noun style works because the verbs are designed to be limited. E.g. there's Remove- but no Delete- in the standard set (shown by `Get-Verb`). So you can press ctrl+space after typing Remove- and see all the different types of things you can remove. Too many? You can filter to Remove-&lt;prefix&gt;* etc. The verbosity of cmdlet names when using it as a shell is mitigated by the aliases (e.g. rm), and that of the parameters by their accepting any case and shortening to anything non-ambiguous (e.g. `rm -rec -fo`). I guess the capitalisation comes from C#/.NET casing? I like PascalCase for its great readability/conciseness tradeoff over the alternatives, and since Windows is case-insensitive as standard, I've never had a huge issue with it.


The tradeoff is that "all the things I can remove" is usually "the set of all things my shell knows about" and not "the set of things related to my task at the moment" -- *-ChildItem would be more helpful!


A neat thing you can do is type "*-Noun" and tab completion will give you options that fill in the "*". Alternatively, "Get-Command *-Noun" will also list all of the matching commands. Get-Help supports that kind of wildcard too, so you get the list of commands along with their help summaries.

The "*" can even be in the middle. I open VS solution files all the time from Powershell. Since there are often many other files and folders with similar names alongside them I just type ".\*.sln" and hit tab.


> What were they smoking when they named as simple an operation as delete "Remove-Item"?

The long names are the official readable names for scripting. It can and does have short aliases like "rm" that you would use in interactive mode.

> And what's with all of the capital letters?

PowerShell is case-insensitive. The capital letters are for readability.


I disagree and agree with the sentiment. As someone more familiar with Linux, I sure would prefer to be able to assume a similar style.

But the biggest thing I'm happy about WRT Powershell is that it's consistent (and pretty well documented). At least it makes sense. Batch scripting really didn't.


Just annoyingly inconsistent when calling PowerShell cmdlets vs. local exes.


PowerShell is different enough that maybe it's not a bad thing?

Seeing functions aliased to their POSIX names is already a little bit misleading when you realize they are not a drop-in replacement at all.


PowerShell was born with a “we know better” attitude that, I hope, is gone by now.

Because they really didn’t.


Except they did, and I for one wish traditional Unix shells would die. Composing software by having every single program and script include a half-assed parser and serializer is causing a lot of unnecessary waste and occasional security problems in computing. Moving structured data in pipes is just a better idea.


Then use JSON or XML in those pipes. Nothing forces you to deal with unstructured data.


Wish I could (actually, I'd prefer JSONB or other binary format). Unfortunately, every program in the UNIX ecosystem assumes unstructured text in pipes, and makes it my responsibility to glue them together by building ad-hoc parsers with grep, head, sort, sed and awk.


A lot of more recent programs (such as the AWS and K8s CLI tools) can easily output JSON. You can make the schemas match, but most of the time you'll need something like jq to transform what one program outputs into what makes sense for the next.

I always try to design my tools with a "terse" output mode that makes it easier to pipe into other programs.
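As a sketch of that kind of glue (assuming `jq` is installed; the JSON shape and field names here are invented for illustration):

```shell
# Hypothetical JSON from one tool, reshaped with jq for the next stage:
# keep only the running instances and emit their names as plain lines.
echo '[{"name":"web-1","state":"running"},{"name":"db-1","state":"stopped"}]' |
  jq -r '.[] | select(.state == "running") | .name'
```

The `-r` flag makes jq print raw strings rather than JSON-quoted ones, so the output drops cleanly into a traditional text pipeline.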



