The word asemic means without semantic content (“a” meaning not and “semic” coming from the same root as semantic). Asemic writing is something that looks like writing but doesn’t necessarily convey information. You can think of it like abstract art – we’re interested in how the writing feels and looks more than in what each character means or how closely it represents something. Eventually I plan to expand this project into procedurally generated writing systems, but I figured I should start with the basics of generating and rendering imaginary writing before getting fancy and trying to give meaning to things. In keeping with that idea of maximal simplicity, we’ll start out by imitating cursive script using the rules of our familiar Roman-style alphabet (words go left to right, top to bottom, without diacritical marks).

With these simple rules, we can initialize a grid from which to build our text. For now, I’m just going to draw a circle at each grid point (the examples here are using the p5.js library).

```
let n_lines = 20;
let n_dots = 30;
let y_padding = 5;
let x_padding = 5;
let radius = 3;

function setup() {
  createCanvas(windowWidth, 400);
  noStroke();
  fill(0);
  background(255);
  smooth();
}

function draw() {
  noLoop();
  let padded_width = width - 2 * x_padding;
  let padded_height = height - 2 * y_padding;
  for (let i = 0; i < n_lines; i++) {
    let y = y_padding + i * padded_height / (n_lines - 1);
    for (let j = 0; j < n_dots; j++) {
      let x = x_padding + j * padded_width / (n_dots - 1);
      circle(x, y, radius);
    }
  }
}
```

Running that code gives us these dots.

Each line of dots is going to serve as the anchor points for our drawing strokes. We can make this more obvious by connecting the points in each line together, which can be done by adding the following bit of code to our draw loop. Essentially, for each line we create a shape with a vertex at each of our dots.

```
beginShape();
for (let j = 0; j < n_dots; j++) {
  let x = x_padding + j * padded_width / (n_dots - 1);
  noStroke();
  circle(x, y, radius);
  stroke(0);
  strokeWeight(2);
  vertex(x, y);
}
endShape();
```

Right now our writing isn’t very exciting, so let’s move the anchor points around. A good starting point is to displace each grid point by adding a random offset in x and y. For now we can draw the x and y offsets as independent, identically distributed samples, but we’ll see later on that you can get some interesting effects by playing around with different distributions. I also switched from simple vertices (which connect each point with a straight line) to curve vertices (I believe p5.js specifically uses Catmull-Rom splines).

```
beginShape();
for (let j = 0; j < n_dots; j++) {
  let x = x_padding + j * padded_width / (n_dots - 1);
  let x_delta = random(-10, 10);
  let y_delta = random(-10, 10);
  noStroke();
  circle(x + x_delta, y + y_delta, radius);
  stroke(0);
  strokeWeight(2);
  curveVertex(x + x_delta, y + y_delta);
  // the start and end points of a curve need to be
  // specified twice, so duplicate the first and last vertex
  if (j == 0 || j == n_dots - 1) {
    curveVertex(x + x_delta, y + y_delta);
  }
}
endShape();
```

Now we have lines that look… somewhat like what we’re going for. Next we need to play with the settings until we get a look we’re happy with. We can change the number of dots per line, the number of lines, the magnitude of the displacement, the coupling between the x and y distributions, etc. I’ve added some sliders to the next bit of code so you can play around with it yourself. The coupling parameter is a bit simplistic: right now it just sets \(\Delta y' = (1-c)\,\Delta y + c\,\Delta x\), where \(c\) is the coupling, so increasing it pulls the strokes toward the line \(y=x\). I figured that was fine for this example; I just wanted to show that coupling is possible. In theory you could replace it with any function you can think of!
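As a small sketch of the coupling idea (the function and its names are my own, not part of the sketch code), the two offsets can be blended so that a coupling of 1 forces each point to drift along a diagonal:

```
// Sketch of the coupling idea: blend the y offset toward the x offset.
// `sample` stands in for a call like p5.js's random(-mag, mag).
function coupledDelta(coupling, sample) {
  const xDelta = sample();
  const yDelta = (1 - coupling) * sample() + coupling * xDelta;
  return [xDelta, yDelta];
}
```

With coupling = 0 the two offsets are independent; with coupling = 1 the y offset copies the x offset exactly, so every displaced point slides along y = x.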

In reality, most text is not just a continuous line of scribbles that fills the entire page. Lines are generally broken up into words containing letters and spaces between them. This doesn’t have to be the case: one could easily imagine a writing system without separation, but adding the flexibility to be able to include spaces allows us to cover a much greater range of appearances.

Our current approach to generating script makes the problem of adding spaces somewhat more difficult than it would be if we had specific characters and words. With words we could just add spacing at the end of each word (and possibly introduce some extra spacing with a typesetting algorithm so everything lines up on both sides). We’ll get to that once we move towards generating alphabets. For now, we need to select both the number of spaces and their locations inside each line. Then we can break up our curve shapes into multiple shapes, each corresponding to one “word”, with spacing between them.

We’ll start with choosing the number of spaces, since it’s the easier of the two tasks: we pick the distribution we want to sample from, draw one number for each line we draw, and round it to the nearest integer. That integer tells us how many spaces will be in the line. For example, we can sample from a uniform distribution using the p5.js function random and round the result with JavaScript’s built-in Math.round.

`let n_gaps_in_line = Math.round(random(3, 7));`

Next is the more challenging part: *given n words, divide a line of length l into n random lengths*. The divisions have to be integer-valued because we are placing a specific number of dots in each word. This is a pretty interesting problem and I encourage you to think about it on your own for a while before I tell you the solution I came up with. Partially because it is a good exercise and partially because I’m hoping you come up with an improvement to what I have, haha.

For me, it made things easier to think of our line segment with n words as an n-dimensional vector normalized according to the Manhattan distance. Formally, the Manhattan distance is the Minkowski distance with \(p=1\): \[ \left(\sum_{i=1}^n |x_i-y_i|^p\right)^{1/p}. \] Less formally, it’s called the Manhattan distance because it’s how you would measure distance when walking around city blocks in downtown Manhattan. You want to go from point A to point B, but the buildings prevent you from taking any diagonals, so the distance is just the sum of the absolute differences between the two points in x and y. I’ve drawn two paths, but there are plenty of other paths from A to B with equivalent Manhattan length.
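The Manhattan distance is easy to write down directly. A minimal helper (my own naming) for two points given as coordinate arrays:

```
// Manhattan (p = 1 Minkowski) distance:
// the sum of absolute coordinate differences.
function manhattanDistance(a, b) {
  let total = 0;
  for (let i = 0; i < a.length; i++) {
    total += Math.abs(a[i] - b[i]);
  }
  return total;
}
```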

What this means for our spacing is that we can think of the length of each word as its own dimension in a vector. We can sample these lengths independently from any distribution we like, and then all we have to do is normalize the resulting vector so that its total Manhattan length equals l. Here’s an example: let’s say we want a line segment with 4 words. We independently sample 4 random numbers between 0 and 1 and get \( 0.2, 0.17, 0.7, 0.91 \). This gives the 4-dimensional vector \[ [0.2, 0.17, 0.7, 0.91] \] with Manhattan length \[l_{Manhattan}=0.2+0.17+0.7+ 0.91 = 1.98. \] To normalize this vector according to its Manhattan length, we simply divide each element by \(1.98\) to get a new vector \[ [0.101, 0.086, 0.353, 0.460]. \] If our line has m dots (let’s say \(m=100\)), we multiply our normalized vector by \(m\) to get the number of dots per word. Our first word will take up \(10.1\) dots, the second \(8.6\), the third \(35.3\), and so on. But there’s a problem with this… our numbers aren’t integers.
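The worked example above can be written as one small function (a sketch with my own naming): divide each sample by the total, then scale by the number of dots.

```
// Normalize independently sampled word lengths so their
// Manhattan length (plain sum) equals m, the dots in the line.
function normalizeLengths(samples, m) {
  const total = samples.reduce((a, b) => a + b, 0);
  return samples.map(s => s * m / total);
}
```

For the samples above, `normalizeLengths([0.2, 0.17, 0.7, 0.91], 100)` gives approximately [10.10, 8.59, 35.35, 45.96], which sums back to 100.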

There are a few ways to solve this. The first is to round all the values to the nearest integer and then set the last word’s length to whatever makes the total come out to 100 dots. The issue is that all of the rounding error accumulates onto the last word, so it can look noticeably different from the rest of the words in the line. Take a look at the header image for this post: you probably didn’t notice it before, but the last “word” in each segment tends to be significantly longer than the others. The effect is a bit more pronounced due to the specific algorithm I was using at the time, but it’s demonstrative of the kind of error you can expect from the simplified solution.
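To make the accumulation concrete, here is a sketch of that naive approach (the helper is my own, for illustration), where the last word absorbs whatever the rounding left over:

```
// Naive split: round every word except the last, then force the
// last word to absorb the leftover so the totals still match.
function naiveSplit(lengths, nDots) {
  const rounded = lengths.slice(0, -1).map(Math.round);
  const used = rounded.reduce((a, b) => a + b, 0);
  rounded.push(nDots - used); // all accumulated error lands here
  return rounded;
}
```

With nine words of 10.4 dots and a final word of 6.4, each of the nine rounds down to 10, and the final word balloons to 10 dots, over 50% longer than intended.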

What we’re going to do instead is cumulatively track the remainder as we iteratively add our numbers together from the first word to the last. This might sound the same as the previous algorithm, but it ensures the maximum error on any single word is less than \(1\) dot, because we progressively correct for rounding errors as we go instead of all at the end. Running through this algorithm with our example above: we start with \(10.1\); the closest integer is \(10\) and the remainder is \(+0.1\). This word goes from segment \(0\) to \(9\) (a length of 10). The next number is \(8.6\); we add the previous remainder of \(0.1\) and then round, getting \(9\) with a remainder of \(-0.3\), so this word goes from segments \(10\) to \(18\). Then comes \(35.3\), which after adding the previous remainder gives a word \(35\) segments long, spanning \(19\) to \(53\). The last word works out to an even \(46\) segments and spans \(54\) to \(99\). I’ve added an example showing this in action below (with the code for the algorithm below it). The vertical lines mark the start and end of each line segment. I tried color-coding the segments, but with anything more than 5 of them it’s hard to see the difference. You can see that with our algorithm the segments always start and end exactly on one of the dots.

```
// draw n random segments
let values = new Array(n_lines + 1);
// start at 0, pick random values,
// and determine the total Manhattan length
let total = 0;
values[0] = 0;
for (let i = 1; i <= n_lines; i++) {
  values[i] = random(0, 1);
  total = total + values[i];
}
// normalize lengths
for (let i = 1; i <= n_lines; i++) {
  values[i] *= n_dots / total;
}
// (Important Part) addition with remainder
let rem = 0;
total = 0;
for (let i = 0; i <= n_lines; i++) {
  // get the updated remainder
  let new_rem = (values[i] + rem) % 1;
  // increment our array by the rounded total
  values[i] = total + round(values[i] + rem);
  total = values[i];
  // if we rounded up, subtract 1 from the remainder
  rem = new_rem - round(new_rem);
}
```

We’re so close! We just need to add the spacing between the words now. I’ll give you a second to try and figure this out on your own.

.

No really… you should try it.

.

.

Ok fine. What we need to do is decide on an integer-valued spacing, for example \(1\). Then, with \(n\) words we know we’re going to need \(n-1\) spaces, each with a length of 1, making for a total of \(n-1\) segments dedicated to spaces. All we have to do is subtract this value from the total number of segments we use for our normalization. For example, instead of 100 in the example above, it would be 97. Then we add this spacing value to the beginning of each word except the first one in the line. This also increases the complexity slightly, since we need to keep track of the now-different start and end locations of each word. Below I have updated the example to incorporate the spaces between words.
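Putting the pieces together, here’s a compact sketch (with my own names, separate from the full p5.js script later in the post) that reserves the space segments before normalizing and then lays out the inclusive [start, end] dot indices of each word using remainder-tracked rounding:

```
// Fit n words plus fixed-width spaces into nDots segments.
// Returns an inclusive [start, end] dot index pair per word.
function layoutWords(samples, nDots, spaceWidth) {
  const n = samples.length;
  const usable = nDots - (n - 1) * spaceWidth; // reserve the gaps
  const total = samples.reduce((a, b) => a + b, 0);
  const lengths = samples.map(s => s * usable / total);
  const words = [];
  let start = 0, rem = 0;
  for (const len of lengths) {
    const size = Math.round(len + rem);
    rem = len + rem - size;      // carry the rounding error forward
    words.push([start, start + size - 1]);
    start += size + spaceWidth;  // skip over the space
  }
  return words;
}
```

With four equal samples, 100 dots, and a space width of 1, the words and spaces tile the line exactly, and the last word ends on dot 99.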

Now all we have to do is add this algorithm to our drawing script from before. Note that the algorithm doesn’t have any checks to make sure there are enough dots to place all the spaces, so things can get a bit weird, but I think this algorithm is a pretty good start for making generative script! I’ve placed the code for this example at the end of the document (it’s really long). Next time we’ll look at creating characters/alphabets to build our asemic writing from, and eventually get into generative hieroglyphs.

If you enjoyed this post, have any improvements for the code or have any questions please let me know in the comments!

```
let x_displace_slider, y_displace_slider, n_dots_slider, n_lines_slider,
    coupling_slider, max_spaces_slider, space_width_slider;
let n_lines = 10;
let n_dots = 70;
let y_padding = 20;
let y_start = 185;
let x_padding = 10;
let radius = 4;

function setup() {
  createCanvas(windowWidth, 600);
  // create sliders
  x_displace_slider = createSlider(1, 100, 1);
  y_displace_slider = createSlider(1, 100, 4);
  n_dots_slider = createSlider(20, 200, n_dots);
  n_lines_slider = createSlider(2, 100, n_lines);
  max_spaces_slider = createSlider(3, 20, 5);
  coupling_slider = createSlider(0, 100, 0);
  space_width_slider = createSlider(1, 10, 2);
  // position sliders
  x_displace_slider.position(width / 2 - x_displace_slider.width / 2, 20);
  y_displace_slider.position(width / 2 - y_displace_slider.width / 2, 50);
  n_dots_slider.position(width / 2 - n_dots_slider.width / 2, 70);
  n_lines_slider.position(width / 2 - n_lines_slider.width / 2, 90);
  coupling_slider.position(width / 2 - coupling_slider.width / 2, 110);
  max_spaces_slider.position(width / 2 - max_spaces_slider.width / 2, 130);
  space_width_slider.position(width / 2 - space_width_slider.width / 2, 150);
  // trigger a draw event when the sliders change
  x_displace_slider.input(redraw);
  y_displace_slider.input(redraw);
  n_dots_slider.input(redraw);
  n_lines_slider.input(redraw);
  coupling_slider.input(redraw);
  max_spaces_slider.input(redraw);
  space_width_slider.input(redraw);
  strokeWeight(2);
}

function draw() {
  smooth();
  background(255);
  fill(0);
  // draw slider labels
  text('x shift', x_displace_slider.x + x_displace_slider.width + 10, 35);
  text('y shift', y_displace_slider.x + y_displace_slider.width + 10, 55);
  text('num dots', n_dots_slider.x + n_dots_slider.width + 10, 75);
  text('num lines', n_lines_slider.x + n_lines_slider.width + 10, 95);
  text('coupling', coupling_slider.x + coupling_slider.width + 10, 115);
  text('max spaces', max_spaces_slider.x + max_spaces_slider.width + 10, 135);
  text('space width', space_width_slider.x + space_width_slider.width + 10, 155);
  noFill();
  // get values from the sliders
  let x_displacement = x_displace_slider.value() / 3;
  let y_displacement = y_displace_slider.value() / 3;
  let n_dots = n_dots_slider.value();
  let n_lines = n_lines_slider.value();
  let coupling = coupling_slider.value() / 100;
  let max_spaces = max_spaces_slider.value();
  let space_width = space_width_slider.value();
  noLoop();
  let padded_width = width - 2 * x_padding;
  let padded_height = height - 2 * y_padding - y_start;
  for (let i = 0; i < n_lines; i++) {
    let y = y_start + y_padding + i * padded_height / (n_lines - 1);
    // determine how many words to place
    let num_words = floor(random(2, max_spaces));
    // draw num_words random segments
    let values = new Array(num_words + 1);
    let starts = new Array(num_words);
    let ends = new Array(num_words);
    let total = 0;
    // start at 0
    values[0] = 0;
    for (let k = 1; k <= num_words; k++) {
      values[k] = random(0, 1);
      total = total + values[k];
    }
    // normalize lengths, reserving space_width segments per gap
    for (let k = 1; k <= num_words; k++) {
      values[k] *= (n_dots - 1 - space_width * (num_words - 1)) / total;
    }
    let rem = 0;
    total = 0;
    starts[0] = 0;
    ends[num_words - 1] = n_dots;
    // addition with remainder
    for (let k = 0; k <= num_words; k++) {
      // get the updated remainder
      let new_rem = (values[k] + rem) % 1;
      values[k] = total + round(values[k] + rem);
      // only add spaces after the first word
      if (k > 0) {
        ends[k - 1] = values[k];
        total = values[k] + space_width;
      }
      starts[k] = total;
      // if we rounded up, subtract 1 from the remainder
      rem = new_rem - round(new_rem);
    }
    let word_number = 0;
    // skip_counter of 1 means a word starts on the next dot,
    // so the first word begins at the start of the line
    let skip_counter = 1;
    for (let j = 0; j < n_dots; j++) {
      // determine if this is the end of our word
      if (j == ends[word_number]) {
        word_number++;
        skip_counter = 2 + space_width;
      }
      let x = x_padding + j * padded_width / (n_dots - 1);
      let x_delta = random(-x_displacement, x_displacement);
      let y_delta = (1 - coupling) * random(-y_displacement, y_displacement);
      y_delta -= coupling * x_delta * y_displacement / x_displacement;
      if (skip_counter == 2 + space_width) {
        // this is the end of our shape
        curveVertex(x + x_delta, y + y_delta);
        curveVertex(x + x_delta, y + y_delta);
        endShape();
        skip_counter--;
      } else if (skip_counter > 1) {
        // this is the space
        skip_counter--;
      } else if (skip_counter == 1) {
        // this is the start of the next word
        beginShape();
        curveVertex(x + x_delta, y + y_delta);
        curveVertex(x + x_delta, y + y_delta);
        skip_counter--;
      } else {
        // this is inside of a word
        curveVertex(x + x_delta, y + y_delta);
      }
    }
  }
}
```

This is so cool! Have you thought of trying to incorporate the frequency of words of specific lengths into the algorithm? If you want to imitate language, using the probability of encountering an n-letter word to weight your space generation (for example, 3-letter words could occur 40% of the time but 8-letter words could occur 5% of the time) might make each squiggle more “word-like,” and each line more “sentence-like” 😮 could experiment with various languages. I know some languages really like compound words.