voidstar

concept for a 21st century BASIC?


Posted (edited)

I may be laughed out of town for this, but I'm going to throw it up in the air anyway. I can't run very fast, please don't tar and feather me 🙂

Attached are notes on how I'd teach young people about programming. We can use the vintage processors, but I think BASIC needs a re-think. Understanding *how* ROM BASIC works is fairly insightful, but I think it's where younger folks quickly lose interest. A vintage reboot could maybe use an alternative approach to how you program the thing. Yes, there is certainly a very decent C compiler for the 6502. But in a way, I consider the approach proposed here a sort of middle ground between assembler and C -- because I believe this approach could export INTENTS into either assembler or any high-level language. So you can get the "feel" for the purity of assembler (at least in the sense of "your code uses resources, be mindful about that"), but you can also run your INTENT on your Windows box or your phone (for real, and NOT in an emulator), or an Arduino.

 

Just tossing out ideas,

v*

 

concept of Intention Oriented programming using virtual constructs.pptx

Edited by voidstar
  • Like 1


I'll have to read through that, but have you considered making a video of you presenting this material? If I've learned anything, it's that there's a substantial audience for slide presentations on tech concepts on YouTube. 

Also, I humbly submit my own XCI game engine (https://github.com/SlithyMatt/x16-xci) as an assembly-like programming language that also lets people make games without programming experience.

  • Like 2

Posted (edited)

I'm not laughing, I'm always interested in talking about value-added retro-programming.

Now your slides, sir.  With all due respect... Let me say that they are rich with detail, and I think a lot of it is useful, but perhaps not in a slide-deck sense.  So: you do need to break them down and split material out.  I recommend a triage, where you remove everything that doesn't matter.  Then, reduce the remaining concepts into tight little bullets.

Then, support your slide deck with a document that expands your ideas back out.

 

Edited by rje


I would also suggest sharing it in a way other than PPTX ... not everyone has or even wants to use Microsoft Office / PowerPoint.

  • Like 2

Posted (edited)
59 minutes ago, Scott Robison said:

I would also suggest sharing it in a way other than PPTX ... not everyone has or even wants to use Microsoft Office / PowerPoint.

At a minimum, check how it displays in the FOSS "Impress", part of the LibreOffice productivity suite. If there are problems, save it as a PPT and see if that displays better. They are working on the quality of Impress's rendering of PowerPoint slides, but there is always the risk of a bad display on the free tools if you haven't checked it on those systems.

Edited by BruceMcF
  • Like 1

Posted (edited)
3 minutes ago, BruceMcF said:

At a minimum, check how it displays in the FOSS "Impress", part of the LibreOffice productivity suite. If there are problems, save it as a PPT and see if that displays better. They are working on the quality of Impress's rendering of PowerPoint slides, but there is always the risk of a bad display on the free tools if you haven't checked it on those systems.

One step further would be to export as PDF ... this will allow one to keep the formatting in what is arguably a more open format.

Edited by Scott Robison
  • Like 1


Thanks for going easy on me! Some folks can get pretty defensive about their favorite tools or methodology -- at all extremes, from "if it needs to be done in assembly, use assembly, duh!" to "Java runs on 300 billion devices, nothing wrong with it." I haven't really run across a "BASIC is King" person, but I suspect one is out there.

Apologies for the very much "draft" appearance - it was typed from hand notes I had from years ago, rather than scanning them.

Brooks' Mythical Man-Month and No Silver Bullet forever stick in my mind. Programming is hard, no two ways about it. It's just... back in the '60s, when they got excited and celebrated shaving just one opcode from a function -- I wish we could still celebrate things like that. At 3-5 GHz we take so much for granted. And I don't literally mean one opcode these days -- I mean removing whole statements from a high-level language.

For example, on page 7 of my presentation -- in a more expanded version, I'd show an optimization I normally do where no stack space is needed. Instead of declaring redundant local copies of what I'm about to put into the database, push an empty record and then populate the database entry directly, in place. Obviously there are limits to where that can work (if the database needs a key, or if the data has a lot of conditions that need to be checked for validation). But if it's just a dumb vector, go for it.
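To make that concrete, here's a minimal C++ sketch of the idea, with a plain std::vector standing in for the "database"; the Record type and its fields are invented just for illustration.

#include <string>
#include <vector>

struct Record {            // hypothetical record type, purely for illustration
    int         id    = 0;
    double      value = 0.0;
    std::string note;
};

std::vector<Record> db;    // the "dumb vector" standing in for the database

// The usual way: build a redundant local copy on the stack, then copy it in.
void add_record_with_temp(int id, double value, const std::string& note) {
    Record tmp;            // extra local object
    tmp.id    = id;
    tmp.value = value;
    tmp.note  = note;
    db.push_back(tmp);     // copies the whole record into the vector
}

// The optimization: push an empty record, then populate the stored entry in place.
void add_record_in_place(int id, double value, const std::string& note) {
    db.emplace_back();     // default-constructed record lands directly in the vector
    Record& r = db.back();
    r.id    = id;
    r.value = value;
    r.note  = note;        // no separate local copy to build and then throw away
}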

 

It's just that, many times, I've witnessed that "feels good" moment when you refactor some dead weight out of code and get a 100x improvement -- and that nugget was sitting there the whole time; with just a little more insight into the code, folks could have seen it clearly (something the "profile for the heavy performance hitters" approach didn't detect). But I work in that extreme where my bottleneck is the performance of the L3 cache, and I know most folks don't go there -- there are engineering situations where critical performance is essential, though understandably not in the more casual types of programming.

 

Example: a kid was calling get_clock() all over the place. I think it was an age-out condition -- i.e. if this table element is older than this much time, then change its color to indicate "old data". So he's doing get_clock() for each element in this 100,000+ entry table, multiple times (a compound condition, not just that single time field). And this table needed to be refreshed pronto -- real-time-like, important data -- and presently it was too slow. What to do? So I said: get_clock is a system call -- probably blocking, and it's certainly not atomic since this clock spans several words. Why not call it once at the top of your age-out thread, and PASS the result (by reference) down along to your functions? The database is locked anyway; the data isn't updating its times until after you're done. Better yet, make it a member variable and don't even bother passing it around.

So -- he said he had thought of that, but he figured the optimizer would have figured all that out. Thing is, technically he's right: I could see an analysis that deduces those are all the same call within the same thread. But this was years ago, and the compiler had no pragmas or other indicators that this was the same thread (or in any case, at least then, it didn't have the smarts to deduce that). Or alternatively, if get_clock() had been altered to just do something static like "return 5;" -- technically right also, the optimizer might be able to deduce that and recognize that all those calls return the exact same thing. But no, the system had an actual clock. Usually you profile, look for your heavy hitters, and go after those first -- but in this case, you wouldn't notice these "cuts by a thousand calls" eating away at the overall performance. Anyway, the age-out feature was rescued.
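For illustration, here's roughly what that fix looks like in C++ -- get_clock(), the Entry layout, and the threshold are stand-ins I've made up here, not the actual code from that system.

#include <chrono>
#include <cstdint>
#include <vector>

struct Entry {
    std::uint64_t last_update = 0;   // timestamp of the element's last refresh
    bool          stale       = false;
};

// Stand-in for the expensive system call (several words wide, not atomic).
std::uint64_t get_clock() {
    using namespace std::chrono;
    return static_cast<std::uint64_t>(
        duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count());
}

// Before: a get_clock() call per element (or more), 100,000+ times per pass.
void age_out_slow(std::vector<Entry>& table, std::uint64_t max_age) {
    for (Entry& e : table)
        if (get_clock() - e.last_update > max_age)   // system call inside the loop
            e.stale = true;
}

// After: read the clock once per pass and pass the snapshot down.
// The table is locked for the whole pass, so one snapshot is good enough.
void age_out_fast(std::vector<Entry>& table, std::uint64_t max_age) {
    const std::uint64_t now = get_clock();           // one call, hoisted out of the loop
    for (Entry& e : table)
        if (now - e.last_update > max_age)
            e.stale = true;
}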

 

 

The approach here isn't really trying to hide the high-level language. It's two things: while coding, have two side monitors being more explicit about the corresponding assembly language. It doesn't even really have to be precise -- just a relative approximation to say "you're doing 30x more instructions now than 10 versions ago; can you afford the slowdown? if not, maybe look into how to do it better", and on the other side, "what resources are being consumed, and who's being blocked, right?" As much as I love simple vintage computers, the modern world (since 2005) is multi-core [maybe not in ultra-low-power applications], so organically playing nice with other programs running on the local CPU is just an aspect many developers will have to deal with. Load balancing is just something we have to deal with during integration -- so I appreciate programs that run correctly, but we have to keep an eye on their efficiency. A sloppy jalopy program that uses 50MB/s across the bus to spin a dumb globe graphic in circles -- it's not as fun to watch anymore when you see that, since it'll impact my transcoding and image stacking, etc.

 

 

 

 

 

 

 

 

 


Thanks for taking the time to put it up in PDF.

I've long contemplated visual programming but have never come up with anything I think is "good enough". My wife works at a middle school that has a "creative coding" class (I think it's called) that is just a very simple introduction using a system they call "code blocks". It gives them a list of the JavaScript primitives and allows them to drag and drop them into a "function" pane, reorder them with a mouse, and edit the parameters a specific block might take (like loop or if conditions).

I like the idea of decoupling the code from a "rigid" text format; it allows the students to get a feel for things before expecting them to get syntax correct and so on.

Based on my quick perusal of the PDF file, I think the problem with it is the amount of effort that has to go into the tool by experts to create something usable by the novice, and people potentially being put off by terminology like "intent oriented programming" or some such.

I've been thinking through a "modern BASIC" for the X16. My hope is to come up with something that could be in the ROM and available at powerup without having to load it from a file. Something that would "abstract" the address space of the X16 into a more linear looking thing rather than banked RAM / ROM + conventional memory. Something that could support cooperative multithreading, multiple "processes", fully relocatable tokenized code, and "virtualized" access to the hardware (particularly the display). Something that has better integrated debugging capabilities.

I don't know what will eventually come of it, as it is something I am restricted to working on in my spare time, but it would provide more functionality than BASIC v2 and provide an environment that could support multiple programs running at once. No, it won't be a speed demon, but it's intended for those who do not want to have to do everything in assembly.

As for the question posed about commercial BASIC software ... yeah, a lot. https://en.wikipedia.org/wiki/QuickBASIC was used in commercial software development. A former employer, Clark Development Company, released multiple versions of PCBoard BBS software developed using QuickBASIC before eventually migrating to C and assembly language somewhere around version 14 or 14.5 (or maybe 14.5a ... it was over 25 years ago now and my brain isn't as nimble as it once was).

https://softwareengineering.stackexchange.com/questions/149457/was-classical-basic-ever-used-for-commercial-software-development-and-if-so-ho answers the same question on a broader basis.

Now, a big part of the problem with answering this question is what qualifies as "BASIC"? Do you mean strictly line-numbered BASIC running a slow interpreter? If so, then less software (though not zero). I think the development language is independent of the eventual delivered program, though. QuickBASIC allowed compiling to a DOS EXE so that one didn't have to own the purely interpreted environment. Also, as we've seen over the last 20 to 30 years, a slow interpreter is not necessarily a stumbling block to commercial software. A significant percentage of the web exists due to strictly interpreted languages (though more powerful than BASIC) such as PHP and JavaScript.

Anyway, I don't mean to poo poo the ideas. Just sharing my thoughts.

Posted (edited)

Oh yes, IMO QuickBASIC was something else altogether.    It was a game-changer and I could certainly see commercial programs coming from that.

For more traditional BASIC: The TRS-80 (CoCo2 at least) had the RENUM command to automatically renumber (and re-adjust GOTOs) in your BASIC programs.  But this wasn't on the original Commodore PET BASIC.  See also (later "external" solutions to lack of renumber on Commodore):  https://www.masswerk.at/nowgobang/2020/commodore-basic-renumber

 

Also in my notes, I was reminded of Bob Pew's notes, now maintained by David Wheeler (a survey of languages and their suitability for the 6502):
https://dwheeler.com/6502/

WAY down in the above link is a very fascinating project: the MOnSter 6502.
https://monster6502.com/
What's most fascinating to me about it is how much slower it runs than the original 6502 (1/20th or 1/50th, I think) -- which is proof that "interconnects matter", i.e. the physical distance that data has to travel. I harp on this a lot, where folks take writing log files across a network share for granted, with not a care in the world about what that costs ("it works for me" -- yeah, but in the target integrated environment, everyone is competing for that pipe). The servers may have 10GBps interconnects, but the end-user front ends are 1GBps straws -- so shoving an 860MB XML message down there was just not workable, mixed with all the other traffic. Or, as in my extreme, keep the processing in L3-cache-sized chunks, as even touching main memory is too slow -- the metal distance actually matters.
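To illustrate the "L3-cache-sized chunks" point, here's a minimal C++ sketch; the 32MB budget and the dummy passes are assumptions for illustration, not numbers from any real system.

#include <algorithm>
#include <cstddef>
#include <vector>

// Instead of making several full passes over a huge array (each pass streaming the
// whole thing back in from main memory), do all the passes over one cache-sized
// block before moving on, while that block is still hot in the L3.
void process_in_chunks(std::vector<float>& data) {
    constexpr std::size_t kChunkBytes = 32u * 1024u * 1024u;      // assumed L3 budget
    constexpr std::size_t kChunkElems = kChunkBytes / sizeof(float);

    for (std::size_t base = 0; base < data.size(); base += kChunkElems) {
        const std::size_t end = std::min(base + kChunkElems, data.size());

        for (std::size_t i = base; i < end; ++i) data[i] *= 2.0f;  // pass 1, chunk is hot
        for (std::size_t i = base; i < end; ++i) data[i] += 1.0f;  // pass 2, still hot
    }
}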

Which is where I'd like my paradigm to go -- to visualize, in real time, things like what percentage of a network pipe you consume. It's fine if you need 500 bytes or 5MB -- just be aware of it, and understand that at a certain threshold, (for example) maybe your application is no longer suitable for a wireless configuration -- and "here" is why: you left some debug logging in, even in the release build.

 

In a way, maybe the concept is more like programming as if you were inside a TRON-type VR environment. It's the system perspective that programmers should have in their mind's eye -- except there are SO many things to keep track of nowadays. Back to the earlier point on the MOnSter 6502: it's admirable to build a "super-sized" physical 6502 that actually executes instructions. But what if they virtualized that entire thing? Model all that logic in a "CAD" or "3D" or "VR" space, and see what kind of programming languages evolve within that space. E.g. someone plops in an ADD instruction, an STA/STB to some addresses -- hey, I can "see" my program growing now, "building" software instead of "writing" it, and the system impacts can be more easily summarized (as larger structural INTENTS would correspond to a sequence of instructions).

Pretty rad. But it also made me realize this about where 3D printing is going: why physically build anything anymore? Just VR-model stuff, and only make it physical when absolutely necessary. I read a lot about the 1851 Great Exhibition in London (so sad the Crystal Palace burned down in the 1930s), around which time there was a big debate about traditional "hand crafted" products versus stuff produced using mechanized assistance (and piano companies that were too "made-by-hand" prideful to adapt to those new processes died out...). I consider that 1851 event a focal point in that paradigm change, a clear entry into modern consumerism. A thing made, like a fancy statue, was unique -- until that decade, when they began to come up with mechanized processes to clone things (but with inferior materials). My grandparents liked to collect Remington statues. But in the modern age, why have physical things? Just virtualize it in a model. And if you really want it, 3D print it. That's where we're headed -- RIP antiques (maybe).

So I'm just wondering if, in some way, something similar can apply to software itself. Why commit to outputting 15,000,000 lines of code (an airline ground control station, perhaps) that's mostly going to rot, not be maintained, and become more and more of a tangled mess as more and more of the original developers move on? (So programmers are analogous to ancient monks, translating the prior sacred text to whatever the modern target-of-the-day happens to be -- re-inventing the wheel of some function expressed in QuickBASIC into Java.) Focus on the core intent, and represent that intent in a more approachable and maintainable fashion (some shape in 3-space). And when it's ready -- when you really need it -- "print it" to the language suitable to your target (whether embedded space/medical device, desktop, or "mobile"). The concept here isn't a new language, but a 3-space visualization of the language (to help trace the relationship between declarations, regardless of what the names are across function calls, and to help visualize the growing resource requirements of that code -- to help avoid being excessively sloppy from the get-go, or to show why running this ROUTE PLANNER and this TRANSCODER at the same time will be a painful experience on THAT target, etc.).

 

EDIT: Code Blocks is fun -- it's just that I think it's important to depict that code runs relative to a system. Don't take for granted that you even have a display, or that you have a 5G pipe, or that you have 3+ GHz and scaling to your number of cores without hogging the system, etc. So much to track... I certainly understand that introductory tools need to be very bare-bones. It's just that with a relatively simple system like the PET, that's kind of perfect for bringing in these concepts -- a couple of registers, a small address space, some I/O ports...

 

 

 

 

 

v*

 

 

 

 

Edited by voidstar

1 hour ago, voidstar said:

Oh yes, IMO QuickBASIC was something else altogether.    It was a game-changer and I could certainly see commercial programs coming from that.

For more traditional BASIC: The TRS-80 (CoCo2 at least) had the RENUM command to automatically renumber (and re-adjust GOTOs) in your BASIC programs.  But this wasn't on the original Commodore PET BASIC.  See also (later "external" solutions to lack of renumber on Commodore):  https://www.masswerk.at/nowgobang/2020/commodore-basic-renumber

Also in my notes, I was reminded of Bob Pew's notes, now maintained by David Wheeler (a survey of languages and their suitability for the 6502):
https://dwheeler.com/6502/

WAY down in the above link is a very fascinating project: the MOnSter 6502.
https://monster6502.com/
What's most fascinating to me about it is how much slower it runs than the original 6502 (1/20th or 1/50th, I think) -- which is proof that "interconnects matter", i.e. the physical distance that data has to travel. I harp on this a lot, where folks take writing log files across a network share for granted, with not a care in the world about what that costs ("it works for me" -- yeah, but in the target integrated environment, everyone is competing for that pipe). The servers may have 10GBps interconnects, but the end-user front ends are 1GBps straws -- so shoving an 860MB XML message down there was just not workable, mixed with all the other traffic. Or, as in my extreme, keep the processing in L3-cache-sized chunks, as even touching main memory is too slow -- the metal distance actually matters.

I've looked at all those resources recently as I've been researching for my own BASIC successor, but I am in no way trying to go as far as your suggestions with it. I want to see a more expressive language that doesn't have as much interpretation overhead as BASIC, but I'm not trying to get to zero overhead. Assembly has its place, as do C and other compiled languages. I just want to see something that can make for a friendlier / more structured experience, where portions are "compiled" or tokenized in advance. (Let's take the simple example of numbers in interpreted BASIC, which are represented as an array of PETSCII digits that have to be converted each and every time the line is run; there should be a tokenized format that can preprocess the digit sequences into binary numbers with no runtime overhead. Also long variable names that are more distinct than two-character variables, but that can be stored in a compact format so that the runtime doesn't have to search for the long names while running.)
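To give a rough sense of the number-tokenization part, here's a sketch in C++; the TOK_INT16 marker and the byte layout are invented for illustration, not from any actual design.

#include <cctype>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

constexpr std::uint8_t TOK_INT16 = 0xF0;   // invented token marker for an inline 16-bit constant

// Tokenize one source line: digit runs become TOK_INT16 plus a 2-byte little-endian
// value, so the interpreter never re-parses ASCII/PETSCII digits at run time.
// Everything else is copied through untouched (a real tokenizer would also crunch
// keywords and variable names).
std::vector<std::uint8_t> tokenize_line(const std::string& line) {
    std::vector<std::uint8_t> out;
    for (std::size_t i = 0; i < line.size();) {
        if (std::isdigit(static_cast<unsigned char>(line[i]))) {
            std::uint16_t value = 0;
            while (i < line.size() && std::isdigit(static_cast<unsigned char>(line[i])))
                value = static_cast<std::uint16_t>(value * 10 + (line[i++] - '0'));
            out.push_back(TOK_INT16);
            out.push_back(static_cast<std::uint8_t>(value & 0xFF));   // low byte
            out.push_back(static_cast<std::uint8_t>(value >> 8));     // high byte
        } else {
            out.push_back(static_cast<std::uint8_t>(line[i++]));
        }
    }
    return out;
}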

Anyway, I have given thought before to doing an XML or JSON based "language" that represents all the usual constructs in a normalized manner, but that has a rigid definition so it could be easily translated into "any language". But even that isn't necessarily visual.

The saying is that a picture is worth a thousand words, and I do consider good programs works of art, but there is an expressiveness that has to be understood by both the human and the computer. I may just not have sufficient imagination, but text-based languages are the medium that provides just the right mix of concrete and abstract to allow us to work effectively with our tools.

I can see certain types of visual tools working well for narrow types of tasks on sufficiently fast processors with enough memory and bandwidth. I don't see how it could work well in a retro environment, though, unless we want to say "you have to give up actually developing on the machine".

I do agree that there is not enough consideration given by many developers today to how their code impacts the hardware. They've been taught to rely on garbage-collected languages with big heavy libraries because programmers are bad at managing things like memory (though they never seem to think about who used what language to implement their preferred language).

Anyway, best of luck. I look forward to more fleshing out of ideas with some sort of prototype so that I can better understand what you're suggesting.

Posted (edited)

I think I've rationalized out why this idea won't work.

The Phoenicians did a good job with the alphabet; writing hasn't changed in thousands of years. The media -- stone, papyrus, paper -- have. But fundamentally, to note an idea down for an enduring purpose, writing still works. Then, around 1900, we started to have movies. And while "a picture is worth a thousand words" indeed, it's still the case that "the book is always better than the movie" -- since in writing, you can express the UNSEEN details of what characters are thinking and feeling, and the surrounding details of a scene (the reason that spider lairs in that cave), etc.

So even movies begin with writing: a script, storyboarding, etc. In programming, we use "notepad". Our thoughts, our functions, don't need to be complete. We can stub things out, move things about. If I feel two spaces here helps my thinking, so be it. Plus, across platforms and systems, the "notepad" is one of the easiest applications to port -- and it needs little to no training. It's as intuitive as paper and pencil: grab a sheet and start drawing.

BUT...

When software, a program, is realized into machine code, it becomes a clockwork/steampunk kind of device, with levers and spinning things. It reminds me of the scene in Back to the Future 3 where Doc created that massive, scary machine, with belts and smoke, to make ONE sad-looking ice cube. Or like Charles Babbage's original machine, the original "program" made physical.

And that's my point: when we compile software, it translates into a precise machine that consumes time and space, and competes for resources that other programs want to share. And I've seen some horribly inefficient code -- apparently, not everyone took an Algorithms course or is aware of binary trees. When the display team says they're "600% over budget" to have those features -- yeah, with code like that, no surprise. But this is nothing new; this dilemma is why there are multiple high-level languages, as they try to decrease the gap between expressing intent and having it realized efficiently.

So that was the gist of the idea: write code as-is. But at the same time, have the code virtually rendered in 3-space as an island clockwork/steampunk-type machine to depict the ins and outs, the dependencies, and its overall "bulk" and efficiency (it doesn't have to be as precise as machine code, just normalized to show complexity relative to all other code). The UNSEEN details, however, remain in the code. And from this, maybe it gives management better insight into what their software projects are doing -- they can see the birth during the Zero-to-One transition (I don't mean BITS, I mean from NOTHING to VERSION 1), and the subsequent growth thereafter: e.g. we're integrating the compression feature today (watch this chunk of an island migrate over to the main code, see the connection points -- and what "cost" that compression entails, as the combined structure is now physically {but virtually} too large to fit in a Type-1 processor, etc.).

So coding as we know it, that free-form notepad, has to stay [even Microsoft has to darn near give Visual Studio away for free; any neat tool you try to build can't compete with the zero cost of these existing tools -- that's the other reason UML failed: exotic $10k/seat tools that needed on-site consultants to use, "NO TY, get out bish"]. But the real-time Situational Awareness of the dependencies and resources being consumed, I think, needs some attention (and I think training younger folks to keep those aspects in mind would be a Good Thing -- we're coding to a specific system that has constraints; even today, don't take it for granted that you have a Virtual Infinity computer -- try to allocate 512GB of RAM to store full world DTED so we can do full-spectrum line-of-sight computations, and see what happens even on your glorious 64-bit machine).

We'll get there, when we need to. After all, we flew a helicopter on Mars recently and saw HD video from it -- amazing software is getting done every day. There is a painful shortage of software talent; it's just that there are so many exciting things we're collectively chomping at the bit to get done. We'll get there...

 

NOTE: I'd say writing pure assembly is PROGRAMMING, but it is not SOFTWARE. Yes, it's symbolic. But you're effectively running patch cables to specific addresses and twiddling bits/knobs, which is admirable to watch done by a professional of that system. The defining aspect of SOFTWARE is portability across platforms, with very little adjustment. Though clearly, there is a subjective threshold to what "very little" means. So there is a distinction between a Computer Programmer (a mechanic of sorts) and a Software Engineer.

Edited by voidstar

Posted (edited)
1 hour ago, voidstar said:

I'd say writing pure assembly is PROGRAMMING, but it is not SOFTWARE. Yes, it's symbolic. But you're effectively running patch cables to specific addresses and twiddling bits/knobs, which is admirable to watch done by a professional of that system. The defining aspect of SOFTWARE is portability across platforms, with very little adjustment. Though clearly, there is a threshold to what "very little" means.

This is incorrect.

"software" is a collection of programs that run on a computer. By definition, computer programs are software, and software is computer programs. 

There's nothing in the definition of "software" that requires it to be portable: the algorithms expressed in the hand-wired patch cables used to program early computers, such as ENIAC, are just as much software as a modern C++ program.

As to "Intent Oriented Programming": I'd argue that what you're really arguing for is more formal Software Engineering. The software industry has been permeated by people who would rather code than graph out a problem, and this has the result of creating software with gaping holes in its design, huge bugs in its implementation, and inconsistent design patterns throughout. 

We don't really need to invent new terms and methods for the industry. Instead, we need to apply disciplines that have already been created. Software engineering is a mature science - but most "software engineers" are not engineers at all, but rather code monkeys. 

There's a reason my college has a Master's program for Software Engineering, which is a completely different discipline than coding. 

 

 

 

 

Edited by TomXP411
  • Like 3

Posted (edited)

Wikipedia has a standard definition of "software", basically in line with Tom.

 

As far as software goes: it will ALWAYS be harder to read than to write.  

I think this is a fundamental difference between programming and writing.

And I don't think that will change, unless you can simplify requirements gathering and decision-making.

 

Edited by rje

Posted (edited)


printf("Hello World");

The above SOFTWARE manifests into one combination of sequences that apply 5V for THIS system and another combination for THAT system -- one PROGRAMS those systems via those combinations accordingly.

The first use of RAM was what, 1948 (Manchester Baby) maybe? After that point, it was resident-in-memory SOFTWARE that enabled the RE-PROGRAMMING of all that wiring (otherwise it was just a fancy switchboard).

These are collectively just casual thoughts, an opinion.  Such is the nature of non-networked brains, we each have our own unique perspectives on things, accumulated from respective experiences.


My Master's was in Computer Engineering at the University of Florida. Maybe the extra Digital Logic and Microprocessor courses gave me a tad more than the usual appreciation of the hardware. But you are quite right: the science and discipline for good Software Engineering are there. I often emphasize the importance of Design, and have to take certain managers aside: don't jump to that coding phase so fast. Preliminary and Detail Designs seem like a lost art. Yes, "code is king" in the business world -- but there is absolutely wisdom in spending the bulk of the budget on Design Artifacts for work that is intended to endure.

NOTE: We've debated about that... "If our headquarters burned down, would you rather save the code or the designs?" Out of 10 people, I'm the only one who said the designs. And I got a beating: "The code is what runs!" Yeah, but... couldn't win. (It's a thought exercise; obviously everything is triple-backed-up, geographically separated, and all that -- I think one backup is even in orbit.)

 

Here's a weird thought: it once occurred to me that one could write every possible program for a system. 01, 10, 11, 100, 101, 110, 111, etc. Walk the combinations from 1 byte up to megabytes, and literally every possible program for the code space of that system could be auto-generated (except, of course, you'd run out of space to hold all those combinations anywhere). It's not a very useful thought, but I still find it amusing. Could the most perfect PROGRAM be hiding somewhere in there? (In contrast, doing so would never reveal the perfect SOFTWARE.)

 

 

EDIT: I'll rescind my thought on pure assembler, but for the following reason: assembly absolutely deserves all the respect and legal protection of any other type of software. No reason to confuse lawyers about that (not that there was, but the principle remains). But I will still simply say: assembler is software of a different sort. Some software is very system-purpose-focused (e.g. hardware drivers), while other software is more abstracted from system specifics (generally involving some kind of compromise, maybe in performance, but with a general benefit of broader portability).

EDIT2: But on second thought... there are multiple ways to execute instructions. I can sit there with wires myself, poking 5V onto the bus lines going into a processor (I couldn't do it fast enough, but the principle remains). I could use the presence of bubbles (see Bubble Memory). I could use smoke signals (giving another meaning to the word VaporWare!). Or use RAM. All of those entail a specific medium and a specific combination to PROGRAM that system. But at what point does it transition into SOFTWARE? I can copyright my ASM code. But can I copyright my hand motions (of applying 5V here and there) also? If I could twist my fingers fast enough, like gang gestures, to represent hex codes that a processor understands -- is that SOFTWARE?


 

Edited by voidstar

Posted (edited)

A slightly older peer suggested that what I am proposing was called Mainstay VIP for the Macintosh. In a way, sort of. I'm OK with scrubbing that notion of "intention based programming" -- done with that (it was a means to perhaps more easily get toward what I was really proposing). I'm proposing more of an enhancement to existing IDEs, as "add-ons" are more approachable. We're long past 80x25 screens to code in. Use my 7 other monitors to give me Situational Awareness about my program. If I could just "look behind" the code, at an angle, and see all that coupling, the dependencies, etc. We reach that point in our minds; offer a way for others to reach it more quickly, to see the ramifications of both design and implementation decisions.

 

NOTE: The 1943 novel The Fountainhead was such a good story about the pursuit of perfection in one's craft.

 

EDIT: It's interesting to me that we have this term "code monkey", as it is similar to the notion of a "wrench monkey" in other areas (I think that was depicted in the TV series "The 100", where a "wrench monkey" had to construct an escape vehicle in secret -- as the situation demanded someone who could just get it done, quickly). Or it's analogous to how "mechanics" were treated by WWI pilots. They might not know the science of the gas, ignition, and pressure, but they can tear down the engine and rebuild it to fly yet one more day.

Edited by voidstar


Speaking of engineering, I feel like scrum has been a plague in many ways. I'm not opposed to agile, and I agree with the manifesto. It is just what some companies have done to agile under the name "scrum" that really bothers me. Agile is supposed to do away with certain things that scrum seems to double down on. There is far too much "no need to think about the problem because we'll just throw it away later; we only need to do the minimum work to achieve two-week sprint goals". The idea that code will be thrown away becomes a self-fulfilling prophecy.

This is not to say that every detail of "Formal Scrum(TM)(Patent Pending)" is bad, but IMO they are just trying to replace one set of often bad practices ("Formal Waterfall(TM)(Patent Pending)") with another.

  • Like 3


Sorry to drivel on, truly.  But ONE last thought, and we'll leave this to future trolls.


My "proof" that a better-than-high-level language programming paraigm can't be built - is based on the notion that, for all these centuries, we haven't come up with a better system than WRITING to communicate ideas.   Movies are nice, but they don't (for the most part) convey the UNSEEN details (of feelings, thought, rationale, etc).   

We did come up with Calculus centuries later, as an abbreviated way to express some mathematics, and that helped dramatically.  So originally I pondered if a similar thing could apply to software - standard symbols for loops, threads, data-streaming, etc., to more efficiently express intents (instead of this Babel of programming languages).


BUT.... What about Augmented Reality?


I can wear some Google glasses and walk around a city, and above everyone's head I could see their Academic Status and Financial Status -- "books" and "$$" signs floating above everyone's head (augmented by the glasses), or maybe symbols indicating topics of interest, clubs. To know the names and criminal history of everyone around me, or just to know a recent history of what books they've read -- maybe call this system Ice Breaker (not that Cyberspace weapon... never mind) -- or call it Deal Breaker? ("danger in that crowd")


Clearly, in contemporary times few would opt in to this, for privacy reasons. But it is a possibility that AR offers that never existed before -- project right onto our optics, chip in eye, no glasses at all -- to augment reality with what was previously UNSEEN. [And it's interesting to me how WRITING might be changing -- are emojis a return to hieroglyphics? Or being able to inject HYPERLINKs as footnotes, anywhere, etc.; we can in theory elaborate on any specific point {which is why "conversation" is so dangerous: I can't pause and clarify things said, nor backspace... poor politicians}.]


That's all: maybe apply some AR to coding, to somehow show a "weight" of that code (runtime, resource usage, coupling, etc.) in the context of the ACTIVE TARGET platform. Don't muck with the existing flow of writing, parsing, linking, etc. -- but offer more SA (Situational Awareness) about the "cost" of that code, the "unseen" attributes of code, in some more standardized fashion than #pragma sections: faint shapes lingering "behind the code" to indicate the relative "bulk" or "weight" for the target system. I can appreciate a Software Purist perfecting class relationships, but there are often missed target-specific nuances. Still, to what end? That "shape" is what it is -- still no useful insight into how to improve things. But at least the cost is not masked, maybe helping during integration to see why what works in isolation now fails?

 

This is pertinent as software and microcontrollers become even more a part of our lives -- from being embedded into hypersonics with 2 lb payload limits, to being injected into our bloodstream, or literally woven into the fabric of our clothes. "Perfect software" that scales to all these targets will remain necessary work.

 

Thanks for the discussion -- yes, I tend to agree, existing Engineering Discipline, if followed, should cover all this. We build (to the platforms we know about), we test, and if it doesn't fit, we spiral again. Platforms/targets are going to migrate and evolve; business can't chase those possible futures with any single "perfect expression" of an algorithm or intent relative to all current and future targets.  // v*


It is an interesting idea if we could come to some level of agreement as to what the various shapes / colors / non-textual cues meant. We have a problem with the evolution of language already. Look at how people are beginning to object to the terms "master / slave" when used in a technological context. The words have legitimate meaning, yet culturally we evolve language to mean more or less than it did previously. We change the pronunciation of words. An excellent example is how Americans used to pronounce "DATA" most typically as "dah-tuh" before the late 1980s, but we've shifted to "day-tuh" since then. Some credit Patrick Stewart's British accent as driving that over seven seasons of Star Trek: The Next Generation. Other examples are harass (is it "har-ass" or "hair-iss"?) or err ("air" or "urr"?).

Extending that to shapes, colors, iconography, look at the typical "save" icon: a 3.5" diskette. Mainstream computers started abandoning it circa 1998, yet we still have it to this day, and a generation of computer users are likely as unknowledgeable about the significance of the icon as they are about a rotary dial phone.

Written language has, as you've said, the ability to include background information through exposition, parentheticals, asides, and so on. A good text editor can take source code comments and squash them out of the way so that you can view the code without the "extraneous" noise, but then you can click on something to expand it when it is useful.

As for ways to "embellish" programs, I think comments are the "best" (for some sufficiently fuzzy value of "best") way we have to augment the significance of the associated code. If we had smarter tools that could extrapolate common idioms into automatic comments, I could see something potentially useful there, but it seems like a Very Hard Problem(TM) to solve.

C++11 and later have "constexpr" expressions. I don't necessarily love the keyword syntax, but the idea is that they are more constant than a "const" (which isn't really always constant, but often is simply used as a synonym for immutable). A valid constexpr function can be used as an initializer of a value or array dimension, or in other similar "real constant" contexts. Why not have a compiler / environment that, in addition to providing compile-time evaluation of functions to constant values, somehow also did compile- and / or link-time profiling-style analysis? Something that didn't require you to actually run the code but still provided "hot spot" identification of the generated code. That is also a Very Hard Problem(TM), but I think less so than AI-based translation of common code idioms into automatic comments, as it were.
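For anyone who hasn't bumped into constexpr, a small sketch of the "real constant" part (the function and the 256-entry table are just made-up examples):

#include <array>
#include <cstddef>

// A constexpr function can be evaluated at compile time when used in a
// "real constant" context, such as an array dimension.
constexpr std::size_t table_size(std::size_t bits) {
    return static_cast<std::size_t>(1) << bits;    // 2^bits, computed by the compiler
}

// const alone wouldn't cut it if the value came from a runtime source;
// constexpr guarantees the value exists before the program ever runs.
std::array<int, table_size(8)> lookup{};           // 256-entry table, size fixed at compile time

static_assert(table_size(8) == 256, "evaluated entirely at compile time");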

  • Like 1


Setting aside that most home computers back then didn't have FPUs (which made the sole floating-point variable type a hair-tearing bug rather than a feature), the biggest problem with BASIC was that it was purely sequential, aside from specific jump commands. Using the commonly accepted programming conventions inherited from FORTRAN and COBOL, if you needed to patch in more than ten lines between any two existing adjacent lines, you had to rewrite the subroutine completely from scratch, and God help you if the result then impinged on the line space of the subsequent subroutine, resulting in cascading rewrites and even more hair tearing.

There's a reason the programming world has moved on to procedural programming paradigms, and further developments from there.

Personally, I would prefer the development of a 21st century version of LOGO.

  • Like 1


Quick note: various peers are emphasizing Rust as the solution to all challenges in modern programming, especially that of managing parallelization of tasks. While it's been around nearly 10 years, just recently I think five "big shops" (Google, Microsoft, etc.) have more thoroughly endorsed Rust. As one tiny example, it has an "unsafe" keyword. I don't mind learning yet another language; it's just that it "feels" to me there should be a more direct way to express intent. In compiler courses, we were taught that all languages are fundamentally rolled up into an "abstract syntax tree", and the nodes in that tree are decorated with the important elements of the syntax. Everything else is just syntax sugar. So it has always seemed to me: why not program directly in that tree, as the "normalized" way to indicate intent? (Or have this tree virtually constructed and available to be viewed -- threaded processing is fast enough to do this in the background -- so one could really program in any language at any time and see it manifested into the same AST.) But yes, all easier said than done.
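To make the AST idea a little more concrete, here's a toy C++ sketch -- the node types and the sample expression are made up purely to illustrate "the tree is the program, the surface syntax is sugar":

#include <iostream>
#include <memory>
#include <utility>

// A toy abstract syntax tree: whether you write "1 + 2 * x" in BASIC, C, or Rust,
// the front end boils it down to nodes like these.
struct Node {
    virtual ~Node() = default;
    virtual int eval(int x) const = 0;
};

struct Num : Node {
    int value;
    explicit Num(int v) : value(v) {}
    int eval(int) const override { return value; }
};

struct Var : Node {                                  // the single variable "x"
    int eval(int x) const override { return x; }
};

struct Add : Node {
    std::unique_ptr<Node> lhs, rhs;
    Add(std::unique_ptr<Node> l, std::unique_ptr<Node> r) : lhs(std::move(l)), rhs(std::move(r)) {}
    int eval(int x) const override { return lhs->eval(x) + rhs->eval(x); }
};

struct Mul : Node {
    std::unique_ptr<Node> lhs, rhs;
    Mul(std::unique_ptr<Node> l, std::unique_ptr<Node> r) : lhs(std::move(l)), rhs(std::move(r)) {}
    int eval(int x) const override { return lhs->eval(x) * rhs->eval(x); }
};

int main() {
    // "1 + 2 * x" built directly as a tree -- no text syntax involved.
    auto expr = std::make_unique<Add>(
        std::make_unique<Num>(1),
        std::make_unique<Mul>(std::make_unique<Num>(2), std::make_unique<Var>()));
    std::cout << expr->eval(10) << "\n";             // prints 21
}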

I'd like to step back a little bit and return to the original title of this thread: a "21st century BASIC." I deviated here to try to express some whole new paradigm to solve all ranges of software, from embedded real-time to casual desktop applications that calculate my Amazon purchases -- maybe nice ideas for when they become necessary (maybe Rust is one step along that path -- it's been touted as "the language for the next 40 years"), but that distracts somewhat from what I really meant by a "21st century BASIC."


If one were to re-introduce the Commodore 6502 with modern components, that's very cool. But BASIC came about because there was a "big computer" (the PDP-1?) to help emulate the newer/smaller systems. Back then, a computer in every home was not yet a thing -- so the new PC had to have some very lightweight, practical development capability, and 2K ROM BASIC was the answer. But nowadays we have i7 laptop supercomputers for -- what, $300 used? So in that context, if we could "do it all over again", would we stick with BASIC? I understand part of it is the vintage experience -- so yes, 1MHz, 50-something-odd instructions, a 40x25 screen, and 2K ROM BASIC. So, no, I don't want to deny that experience and the ability to run all that old code -- make it an option (plug in that ROM).

But at the same time -- in the interest of appealing to a newer/younger generation -- could we do better? We could model that whole 6502 in VR. Maybe we do "visual assembly"? To better convey that idea of working directly with a "system" -- specific registers, specific addresses. Instead of a Code Blocks-style tool with IF blocks, maybe do specific assembly-instruction blocks? No, nobody seriously hand-writes assembly anymore -- maybe hardware driver folks, or maybe game developers in extreme situations (and in aerospace, we do, on occasion). [Although the nature of the "problem solving" needed when you're very resource-poor fascinates me; I think it really helps critical thinking skills -- with clever ways to twiddle bits in registers and avoid accessing main memory at all -- which I think is lost in high-level languages. But I understand that's the point: focus on what you want to do, less on the precision of how it is done on your system.]

In 1977, learning assembly and the system was a daunting experience -- lots of manual reading (set this mask to indicate I/O direction, pulse this address, what?). But now that we have a "big computer" right next to my vintage computer, maybe we can create a better learning and development environment? But it has to be fun. Maybe BASIC wasn't fast and had a limited scope of what it could produce -- but it was "fun". In a few minutes, I'm reading from the keyboard, doing a calculation, and PRINTing some results. Maybe it's like a bicycle with training wheels -- a safe way that we all start with. Although some might argue that's a horrible way to teach kids how to ride a bicycle -- better to stick them on a downhill field and practice "gliding."


So I think somehow teaching or emphasizing that the code runs in a system is important. Don't take for granted that you can read keys from the keyboard; somehow visually depict how that's happening. Maybe when you run the program, show VR highlights of signals going through the processor: how it goes through chips to talk to the monitor, or those PIA chips, or how you're only using 5 out of 12 RAM chips -- and watch your use of the system grow.

I'm not sure if such a thing would inspire - or just confuse - younger folks.  But it's in the direction of what I meant by a "21st century BASIC."  With a relatively simple system, we should somehow be able to depict the resources that are being used as our code is running.  And, in time, we could (virtually) swap in new systems -- and see where/why adjustments are needed (or perhaps those adjustments happen automatically).


Dinner time.  Cheers!


 


I think radical change is hard to achieve. I'm not opposed to the thought exercise and would be interested in concrete suggestions on how it might look, but I just don't have the mindset myself to come up with what that radical change would look like.

Part of the problem I've alluded to previously is that we all have different perceptions of "imagery" based on our cultural backgrounds and physical limitations. Text is a standard encoding that can be tweaked to enable many people with visual perception problems (poor vision that requires larger fonts to read clearly, or color perception that requires eliminating various combinations of foreground and background colors that are impossible to differentiate for some). Then we have various abilities to identify abstract shapes (think imagining what clouds look like as animals or other shapes, or picking out constellations and giving them names and mythologies). Yet not all people will see the same shapes or what have you.

In order to communicate ideas effectively, there must be a shared background that we agree upon. Textual programming languages are not perfect, given the proliferation of varieties, but at least they all build upon an almost universal shared experience of alphabet and language used to communicate ideas. They are one of the reasons we have made so much advancement since hieroglyphs, petroglyphs, and cave paintings were the primary means of recording information in a form more tangible than the spoken word. The fact that we can define rules that allow us to translate high level languages simply into a binary format that a computer can natively understand is a useful property.

I look forward to innovation. I'm just not the right guy to invent it, probably. 🙂

On 5/18/2021 at 7:09 PM, Kalvan said:

Personally, I would prefer the development of a 21st century version of LOGO.

I'm not going to go on and on about it, but I've always felt LOGO could be developed further.  As a kid it really helped me understand the link between math and geometry.  Later in life, I'd come across geometric proofs for problems that weren't necessarily geometry-related to begin with (mostly on YouTube math channels), and I'd think, "I wonder if you could've used LOGO to arrive at this."  It represents such an interesting way of looking at things, I heartily second the idea of a 21st Century LOGO.

Now you've got me thinking about LOGO (even a non-updated old fashioned one) for the X16...

  • Like 1

Posted (edited)

PENDOWN
LEFT 90
FORWARD 100

🐢-----------------------------

Edited by kelli217
  • Like 1

