Using Sound (in C with CC65)?


rje

Question

I'm slowly building a kind of C toolchain so I can program interesting things on the CX16.

Now it's time to start thinking about a PSG interface.  

With sprites, I've had success by defining a sprite record, then writing functions that do basic sprite things.  I think that is how I will approach the PSG as well.  I've muttered about a general PSG API in other discussions on this forum, so I'll probably dig through them to figure out where to start.

Obviously, a voice definition struct would have the voice number, waveform, and volume.  Would it have EVERYTHING, including frequency?  Would the functions then just take what they need from the struct pointer?  My thought is that I don't want to multiply typedefs.

1. Does it make sense to have most or all data in one typedef?

2. Does it make sense to have ADSR data in there as well?  

3. Does it make sense to try to implement envelope control (in C)?  I suppose there will have to be an external "clock" that calls a C envelope manager, which means there will have to be one sound structure allocated per voice.  And yes, that sound "structure" is probably just going to be a hunk of contiguous memory that an interrupt-driven assembly routine could also handily work with instead.  (A rough sketch of what I mean follows the list.)

4. What else am I missing?
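
For #3, roughly what I'm picturing is the sketch below: a per-voice state block plus a tick routine that some regular "clock" (the 60 Hz VSYNC interrupt, say, or a polled jiffy counter) would call.  Every name, the phase values, and the 0..63 volume range are placeholders, not a worked-out design.

/* Sketch only: per-voice envelope state plus a tick routine driven
   by an external clock.  One block is allocated per PSG voice.     */

#define NUM_VOICES  16

enum { ENV_IDLE, ENV_ATTACK, ENV_DECAY, ENV_SUSTAIN, ENV_RELEASE };

typedef struct {
    unsigned char phase;     /* which segment of the envelope we're in  */
    unsigned char level;     /* current volume, 0..63                   */
    unsigned char attack;    /* per-tick step sizes                     */
    unsigned char decay;
    unsigned char sustain;   /* level to hold during the sustain phase  */
    unsigned char release;
} Envelope;

static Envelope envelopes[NUM_VOICES];   /* one block per PSG voice */

void envelopeTick(void)
{
    unsigned char i;
    Envelope* e;

    for (i = 0; i < NUM_VOICES; ++i) {
        e = &envelopes[i];
        switch (e->phase) {
        case ENV_ATTACK:
            if (e->level + e->attack >= 63) { e->level = 63; e->phase = ENV_DECAY; }
            else                            { e->level += e->attack; }
            break;
        case ENV_DECAY:
            if (e->level <= e->sustain + e->decay) { e->level = e->sustain; e->phase = ENV_SUSTAIN; }
            else                                   { e->level -= e->decay; }
            break;
        case ENV_RELEASE:
            if (e->level <= e->release) { e->level = 0; e->phase = ENV_IDLE; }
            else                        { e->level -= e->release; }
            break;
        default:
            break;   /* IDLE and SUSTAIN just hold their current level */
        }
        /* ...then write e->level out to that voice's PSG volume register. */
    }
}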

 


2 answers to this question


Here's my initial structure.  The initial function would therefore be this:

void defineVoice( Voice* voice );

-----

typedef struct {

    /* unsigned bit-fields so the small fields can hold their full ranges */
    unsigned int frequency: 16;   /* 16-bit PSG frequency word           */
    unsigned int channel:    2;   /* left/right enable                   */
    unsigned int volume:     6;   /* 0..63                               */
    unsigned int waveform:   2;   /* pulse, sawtooth, triangle, noise    */
    unsigned int pulseWidth: 6;   /* 0..63, pulse waveform only          */

    unsigned int attack:  8;      /* envelope parameters */
    unsigned int decay:   8;
    unsigned int sustain: 8;
    unsigned int release: 8;

} Voice;

 

Then I've got things like these:

#define     CHANNEL_LEFT    (1 << 6)
#define     CHANNEL_RIGHT   (2 << 6)
#define     CHANNEL_BOTH    (3 << 6)

#define     WAVE_PULSE      (0 << 6)
#define     WAVE_SAWTOOTH   (1 << 6)
#define     WAVE_TRIANGLE   (2 << 6)
#define     WAVE_NOISE      (3 << 6)
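
And here's a rough sketch of how defineVoice() might push that struct out through the VERA data port.  It assumes the PSG base address of $1F9C0 in VRAM (four bytes per voice) and takes the voice number as a separate argument, since the struct doesn't carry one; the register defines are just the standard VERA I/O locations at $9F20-$9F23.  Note that the 2-bit channel and waveform fields would hold the unshifted values (1-3 and 0-3) and get shifted here, whereas the pre-shifted CHANNEL_* / WAVE_* defines above would instead be OR'ed straight into the register bytes.

/* Sketch only.  Points VERA data port 0 at this voice's four PSG
   registers (auto-increment 1) and writes the Voice fields out.   */

#define VERA_ADDR_L  (*(volatile unsigned char*)0x9F20)
#define VERA_ADDR_M  (*(volatile unsigned char*)0x9F21)
#define VERA_ADDR_H  (*(volatile unsigned char*)0x9F22)
#define VERA_DATA0   (*(volatile unsigned char*)0x9F23)

#define PSG_BASE     0x1F9C0UL   /* PSG registers in VRAM, 4 bytes per voice */

void defineVoice(unsigned char voiceNum, Voice* voice)
{
    unsigned long addr = PSG_BASE + voiceNum * 4UL;

    VERA_ADDR_L = (unsigned char)(addr & 0xFF);
    VERA_ADDR_M = (unsigned char)((addr >> 8) & 0xFF);
    VERA_ADDR_H = (unsigned char)((addr >> 16) & 0x01) | 0x10;   /* bank bit + increment 1 */

    VERA_DATA0 = (unsigned char)(voice->frequency & 0xFF);       /* frequency, low byte    */
    VERA_DATA0 = (unsigned char)(voice->frequency >> 8);         /* frequency, high byte   */
    VERA_DATA0 = (voice->channel  << 6) | voice->volume;         /* L/R bits + volume      */
    VERA_DATA0 = (voice->waveform << 6) | voice->pulseWidth;     /* waveform + pulse width */
}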


I had earlier thought the ADSR data could be four bytes per voice, for a total of 64 bytes, buried somewhere around $400 or so.  Each voice would also need a status counter, so that we'd know where the sound was along its envelope.

The status would, I think, have to be a quantized "unit" segment, with 0 meaning the sound is right at the start of the Attack phase and MAX_VALUE meaning it has just finished the Release phase.  Or I could just do things a brute-force way and not try to be clever about it.
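
To make that a bit more concrete, here's one way the status counter could be read back into a phase.  This sketch treats the four ADSR bytes as segment lengths in ticks rather than normalizing against a fixed MAX_VALUE, and the table is a plain C array here, though it could just as well be 64 bytes parked at $400 for an interrupt-driven assembly routine to share.  All names and numbers are illustrative.

/* Sketch only: one position counter per voice, 0 = start of Attack,
   and "finished" once it has run past the end of Release.           */

#define NUM_VOICES  16

static unsigned char adsr[NUM_VOICES][4];    /* A, D, S, R segment lengths, in ticks */
static unsigned char position[NUM_VOICES];   /* how far along the envelope we are    */

/* Returns 0=attack, 1=decay, 2=sustain, 3=release, 4=finished. */
unsigned char envelopePhase(unsigned char voiceNum)
{
    unsigned char phase;
    unsigned int  limit = 0;
    unsigned char pos   = position[voiceNum];

    for (phase = 0; phase < 4; ++phase) {
        limit += adsr[voiceNum][phase];
        if (pos < limit) {
            return phase;
        }
    }
    return 4;   /* past the end of the Release segment */
}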
