Planet CDOT (Telescope)

Thursday, May 26, 2022


Loran

Internal dialogue of a procrastinator

R: the rational part. I: the instant gratification part.

R: You should really start working on this homework assignment; it will take a couple of hours at most.
I: It's okay, we can do it tomorrow morning. Let's set the alarm really early. I am so sleepy now, after all, and tomorrow we will have some coffee.
R: But if we wake up 2 hours before the deadline, that's too early for us! We are not going to get enough sleep!
I: Okay, maybe we should start. But wait, let's first look at our news feed. It won't take too much time, and we will feel so much better.
R: I don't know... Are you sure we are not going to get stuck again?
I: Absolutely not! It is not that much to read; we read most of it today already.
R: Sounds reasonable. Okay.
... 20 minutes later.

R: I am closing this now! Let's open the slides.
I: Okay.
R: Why are we lying on the table with our eyes closed?
I: I am so sleepy and tired. Maybe reconsider doing it in the morning? Look, if we go to sleep now and then wake up early, you would still get your 7-8 hours of sleep.
R: You know how it will go. We would fall asleep at our usual time anyway. Imagine how tired you will be in the morning!
I: Not if we have coffee!
R: Look, we should at least start tonight.
I: Okay. Also, I think we really need to make a blog post called "internal dialogue of a procrastinator". Look, it is very important that we do it now, because it is now or never. Remember how you always wanted to have a blog?
R: Agreed, sounds reasonable. I guess if we have some time left after that, we could look at the homework. But I will set the alarm for tomorrow.
... 20 minutes later.

I: Finished the blog post! Told you it was going to be fast.
R: Cool, we can still do some homework before going to bed.
I: Sure. But your friend here needs you to watch this video. And you'd better do it; you can't tell them you are busy, they can see you not working on your homework.
R: Okay. Remind me, why are our priorities like that? Why can't we be like normal people, who do important things first? Now I will have to punish you with feeling bad for not starting early.
I: Oh no. What's the point of that? I thought we were friends?
R: Sorry, it is in your... no, in our best interest. Next time you'll remember how bad it was, so you'll have a bit more motivation to start earlier.
I: If only that had ever worked.
I: At least now we agree that we are postponing everything to the morning.

by Loran at Thu May 26 2022 03:48:54 GMT+0000 (Coordinated Universal Time)

Sunday, May 22, 2022


Ray Gervais

Setting up Fedora 36 From Scratch

For the first time in years, I managed to break my Fedora installation to the point where I thought, “hey, why not install from scratch and start fresh?” So I did.

Evaluating Fedora Media Writer

Recently, I decided to change it up and use the official Fedora Media Writer, which has replaced Rufus, Linux Live USB, and Balena Etcher (my installation media creators of choice for the past few years). Using the tool itself is as smooth and polished as one could hope, but I did discover two gripes:

  1. When used against a USB drive, it partitions the drive to the size required for the installation + 2GB (I may be wrong on the number) for live boot purposes. This means that if I give it a 256GB USB, for example, resizing the drive to reclaim the lost 200+GB is a PITA due to its known behaviour, which requires modifying the USB's partitions from the live boot itself rather than on the host machine that created it.
  2. I’m not sure if it’s based around user error, hardware failure, or a software bug, but there’s a good 50% chance that a USB formatted by Fedora Media Writer becomes unusable once ejected. I had done some research into this when I was testing a few weeks ago, and am looking into how to fix two separate USBs which are experiencing the same issue after using Fedora Media Writer on two separate machines. Working out how to resolve that issue as I type this. Anyways, that’s another blog post if I can figure it out.
    1. Update: I managed to restore the 256GB which was eaten by FMW simply by booting into it (I guess Fedora couldn't read the UEFI boot sector?) and removing the FAT32 boot sector. With that done, I booted into normal Fedora and reformatted the drive back to normal (roughly the steps sketched below). Less annoyed now that I've fixed it, but still something to investigate for the other USB.
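The reclaim boils down to roughly the following (a sketch only; /dev/sdX is a placeholder for the stick, so double-check the device with lsblk before wiping anything):

# Identify the stick first; wiping the wrong disk is unrecoverable
lsblk

# Drop the old partition table / FAT32 boot sector left by the live image
sudo wipefs --all /dev/sdX

# Recreate a single partition spanning the whole drive and format it
sudo parted --script /dev/sdX mklabel gpt mkpart primary 1MiB 100%
sudo mkfs.exfat /dev/sdX1   # or mkfs.vfat, depending on the filesystem you want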

Installing Linux distributions has improved year over year, to the point where even Arch Linux received a newly polished graphical installer, which I find humorous but also a fantastic step in making the distro accessible to more people. In my experience, installing Fedora has always been painless, even when tied with manual configuration or partitioning. So, let's skip the easy stuff and show what it looks like upon first boot after install with only a small change: the background wallpaper.

Stock Fedora Installation

Customized Fedora Installation

Setup & Configuration

The last time I set up Fedora was in 2019, and later in 2021 I took the current state and immortalized it using Ansible for a few-click setup of my developer environment. More can be read on that here, but this setup removes my previous decisions and automation so that I could evaluate and be intentional about every customization I make on the fresh installation.

Gnome Desktop

An interesting item that those who know me closely will notice when looking at my setup: I didn't find any reason to change the default desktop theme to a third-party theme (which my Ansible has done since its implementation). At most, I switch from the modest light default to my preferred desktop theme: dark. Gnome 42's dark theme pairs well with the few extensions I add:

  • Blur My Shell → To bring the few outstanding eyesores into a modern look
  • Clipboard → Because who doesn’t want a clipboard?!
  • Pop!OS Shell → I’m in love with Tiling Window Managers, but find them inaccessible to those who also want a normal desktop experience or live in a world which is aware of non-terminal applications. This extension fulfills that ask by providing robust tiling to Gnome.
  • GSConnect → At the start of the year, I switched to the Google Pixel 6 Pro after having the iPhone 11 Pro Max since its release, and wanted to test out the KDE Connect + Android possibilities. So far, I've been very happy with being able to message, transfer photos and files, and even issue commands between my desktop and phone.

Shell & CLI Tools

Since discovering fish, I've yet to go back. The times when I miss having a POSIX-compliant shell are quickly dismissed when I remember the load times of a fully customized zsh environment that could mimic fish's out-of-the-box experience.

Language Servers

CLI Replacements

Fonts & Icons

Though it feels like I change fonts and color schemes weekly, I've attempted to follow a rule which keeps me from straying too far into the illusion-of-choice trap. Essentially, any font I use must be found within the standard Fedora repositories (and installed via dnf), or the RPM Fusion repositories, which are of course enabled as well. The following fonts are what I install and move about as the mood hits me:
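As an illustration of that rule (the package names here are just examples from the standard repositories, not my actual list), installing a font is a single dnf call:

# Install a couple of monospace fonts straight from the Fedora repos
sudo dnf install fira-code-fonts jetbrains-mono-fonts

# Confirm fontconfig can see them
fc-list | grep -i fira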

For icons, though the latest batch of Adwaita icons look great, there's something about the unified look which dedicated icon packs provide that I can't help but miss when not in use. My icon pack of choice is Papirus.

Adwaita Icons

Papirus Icons

Developer Environment

Before my Fedora 36 install, I was messing with Doom Emacs to compare it against my primary development environments: Visual Studio Code & Neovim. It was an interesting two-week experiment, and I got fluent enough with the tool that I even drafted and published my last blog post from Emacs itself. Perhaps I'll explore more this summer, but for now I've gone back to the battle-tested setup until I get annoyed once more.

Neovim

Used primarily for small edits and quick note taking. In the past, I've tried to use Vim for more tasks and contexts, attempting to approach an Emacs-like environment where I could do everything from a single editor. Vim doesn't scale as well in that area, so I found myself regressing back to using it as a text editor alone, which led me to mess with the concept of Doom Emacs. After that experiment, I was content with what I had learned and with the possibility of using it more in the future, but I found that I was still more productive in Visual Studio Code when I need a dedicated development environment.

In the past, I had a 200+ line vimrc file which configured the editor with a whole bunch of plugins and packages: setting color schemes, enabling language server support for Go and Rust, and formatting Markdown tables to always look neat. I found NvChad in the past year, and have migrated to using it plus a far, far smaller init.lua file to set the few items that it doesn't set by default.

Visual Studio Code

No matter how many times I try to escape the embrace of Visual Studio Code, I always find myself returning. It's one of the best IDEs for any language once configured, and what's more, it's powered by incredible tooling underneath and extensions which tailor the experience. In my Ansible setup, I included at least 20 extensions which are installed during the playbook's run-through, but over time I've found myself not needing so many, preferring to keep my Visual Studio Code setup as lightweight as possible. Here are the standard extensions which I have installed:

  • Better TOML → Better TOML Language support
  • Github Copilot → Your AI pair programmer
  • Go → Rich Go language support for Visual Studio Code
  • Nord → An arctic, north-bluish clean and elegant Visual Studio Code theme.
  • Paste JSON as Code (Refresh) → Copy JSON, paste as Go, Typescript, C#….
  • Rust → Rust for Visual Studio Code
  • Todo Tree → Show TODO, FIXME, etc. comment tags in a tree view

Closing Notes

An item that I’ve become increasingly aware of is my lack of interest in tweaking 24/7. Gone are the days full of testing & messing around, here are the days where my patience and focus demands stable & accessible. Sure, I may change the fonts & icons to fit my mood and interests for the day, but I don’t find myself constantly wanting to distrohop, try out other desktop-environments or mess with Nvidia drivers where It’s not needed. Part of this I can attribute to Gnome’s recent direction and releases, which compared to the earlier Gnome 3.X days, is a breath of fresh air and much needed.

by Ray Gervais at Sun May 22 2022 00:00:00 GMT+0000 (Coordinated Universal Time)

Monday, May 16, 2022


David Humphrey

HTTP Testing with Hurl in node.js

The JavaScript ecosystem has been benefiting lately from pieces of its dev tooling being (re)written in Rust.  Projects like swc, Parcel 2 and parcel-css, deno, dprint and others have brought us tremendous performance improvements with tasks like bundling, formatting, etc.  Recently, my favourite Rust-based, HTTP testing tool gained the ability to be run in node/npm projects, and I wanted to show you how it works.

Hurl is a command-line tool for running HTTP requests defined in simple text files (*.hurl).  I learned about it by chance on Twitter over a year ago, and have been using and teaching it to my programming students ever since.  The name comes from the fact that it builds on top of curl's HTTP code. The real benefit to Hurl is that it lets you write simple, declarative tests that read just like the HTTP requests and responses they model.  Oh, and it runs them ridiculously fast!

Here's an example test file that makes sure http://example.net/404.html returns a 404:

GET http://example.net/404.html

HTTP/1.0 404

You can get much fancier by setting headers, cookies, auth, etc. on the request, and asserting things about the response, including using JSONPath, XPath, regexes, and lots of other conveniences.  You can also capture data from the headers or body, and use these variables in subsequent chained requests. The docs are fantastic (including this tutorial), and go through all the various ways you can write your tests.

Here's a slightly more complex test, which uses a few of the techniques I've just mentioned:

# 1. Get the GitHub user info for @Orange-OpenSource
GET https://api.github.com/users/Orange-OpenSource

# 2. We expect to get back an HTTP/2 200 response. Also, assert
# various things about the Headers and JSON body. Finally
# capture the value of the `blog` property from the body into
# a variable, so we can use that in the next step.
HTTP/2 200
[Asserts]
header "access-control-allow-origin" == "*"
jsonpath "$.login" == "Orange-OpenSource"
jsonpath "$.public_repos" >= 286
jsonpath "$.folowers" isInteger
jsonpath "$.node_id" matches /^[A-Za-z0-9=]+$/
[Captures]
blog_url: jsonpath "$.blog" 

# 3. Get the blog URL we received earlier, GET it, and make
# sure it's an HTML page
GET {{blog_url}}

HTTP/2 200
[Asserts]
header "Content-Type" startsWith "text/html"

I've been using Hurl to write tests for node.js HTTP APIs, especially integration tests, and it's been a joy to use.  I still write unit tests in JavaScript-based testing frameworks, but one immediate benefit of adding Hurl is its speed, which helps shake out race conditions.  Many of my students are still learning asynchronous programming, and often forget to await Promise-based calls.  With JavaScript-based test runners, I've found that the test runs take long enough that the promises usually resolve in time (despite not being await'ed), and you often don't realize you have a bug.  However, when I have the students use Hurl, the tests run so fast that any async code path that is missing await becomes obvious: the tests pass in JS but start failing in Hurl.
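To make that concrete, here's a contrived Express-style sketch of the bug class (the route and helper names are made up): the handler responds before the write actually finishes, so an immediate follow-up request can miss the data. A slow test run usually hides the race; Hurl's speed tends to expose it.

const express = require('express');
const app = express();
app.use(express.json());

const items = new Map();

// Pretend this is a database write that resolves a few milliseconds later
const save = (id, data) =>
  new Promise((resolve) => setTimeout(() => { items.set(id, data); resolve(); }, 10));

app.post('/items/:id', (req, res) => {
  save(req.params.id, req.body); // BUG: missing `await`, so the response races the write
  res.status(201).json({ ok: true });
});

app.get('/items/:id', (req, res) => {
  const item = items.get(req.params.id);
  item ? res.json(item) : res.sendStatus(404);
});

app.listen(8080);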

I also found that Hurl is pretty easy to learn or teach.  My AWS Cloud students picked it up really quickly last term, and I think most node.js devs would have no trouble becoming productive with it in a short time.  Here's what one of my students wrote about getting started with Hurl in his blog:

"The learning curve is pretty simple (I managed to learn the basics in a couple of hours), less setup todo since it's just a plain file, the syntax is English friendly, besides the jsonPath that could take some times to adapt."

As I've been writing tests and teaching with Hurl over the past year, I've been pretty active filing issues. The devs are really friendly and open to suggestions, and the tool has gotten better and better with each new release.  Recently, I filed an issue to add support for running hurl via npm, and it was shipped a little over a week later!

Installing and Using Hurl with npm

Let me show you how to use Hurl in a node.js project.  Say you have a directory of *.hurl files, maybe inside ./test/integration.  First, install Hurl via npm:

$ npm install --save-dev @orangeopensource/hurl

This will download the appropriate Hurl binary for your OS/platform from the associated release, and create node_modules/.bin/hurl which you can call in your scripts within package.json.  For example:

"scripts": {
  "test:integration": "hurl --test --glob \"test/integration/**/*.hurl\""
}

Here I'm using the --test (i.e., run in test mode) and --glob (specify a pattern for input files) options, but there are many more that you can use.  NOTE: I'm not showing how to start a server before running these tests, since that's outside the scope of what Hurl does.  In my case, I typically run my integration tests against Docker containers, but you could do it lots of ways (e.g., use npm-run-all to start your server before running the tests).
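For example, one hedged way to wire that up with npm-run-all could look like the snippet below; the start script and names are placeholders, and in practice you'd also want to wait until the server is actually listening before the Hurl run kicks off:

"scripts": {
  "start": "node server.js",
  "test:integration": "hurl --test --glob \"test/integration/**/*.hurl\"",
  "test": "npm-run-all --parallel --race start test:integration"
}

Here --race tells npm-run-all to stop the other parallel task (the server) once the Hurl run exits.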

In terms of Hurl's output, running the two tests I discussed above looks like this:

npm test

> hurl-npm-example@1.0.0 test
> hurl --test --glob *.hurl

expr=test1.hurl
test2.hurl: RUNNING [1/2]
error: Assert Failure
  --> test2.hurl:14:0
   |
14 | jsonpath "$.folowers" isInteger
   |   actual:   none
   |   expected: integer
   |

test2.hurl: FAILURE
test1.hurl: RUNNING [2/2]
error: Assert Http Version
  --> test1.hurl:3:6
   |
 3 | HTTP/1.0 404
   |      ^^^ actual value is <1.1>
   |

test1.hurl: FAILURE
--------------------------------------------------------------------------------
Executed:  2
Succeeded: 0 (0.0%)
Failed:    2 (100.0%)
Duration:  174ms

As you can see, both tests are failing.  The error message format is more Rust-like than most JS devs will be used to, but it's quite friendly.  In test2.hurl, I've got a typo in $.folowers, and in test1.hurl, the response is returning HTTP/1.1 vs. HTTP/1.0.  A few quick fixes and the tests are now passing:

$ npm test

> hurl-npm-example@1.0.0 test
> hurl --test --glob *.hurl

expr=test1.hurl
test2.hurl: RUNNING [1/2]
test2.hurl: SUCCESS
test1.hurl: RUNNING [2/2]
test1.hurl: SUCCESS
--------------------------------------------------------------------------------
Executed:  2
Succeeded: 2 (100.0%)
Failed:    0 (0.0%)
Duration:  169ms

Part of what's great about Hurl is that it isn't limited to a single language or runtime.  Despite the title of my post, Hurl isn't really a JS testing tool per se.  However, being able to "npm install" it and use it as part of your local or CI testing adds something new to your testing toolkit.  I still love, use, and teach tools like Jest, Playwright, and others, but I'm excited that JS devs now have an easy way to add Hurl to the mix.

Hopefully this will inspire you to try including Hurl in your node.js HTTP project testing.  I promise you that you'll write less test code and spend less time waiting to find out if everything works!

by David Humphrey at Mon May 16 2022 20:02:14 GMT+0000 (Coordinated Universal Time)

Thursday, May 12, 2022


Eugene Chung

What I learned from Project 1 of Udacity's Data Science with Python bootcamp

Introduction

As part of the project I completed successfully, I used SQL to explore a database related to movie rentals. SQL queries were run and visualizations were built to showcase the output in a PDF. With the prior knowledge of SQL I learned at Seneca, I was able to complete this section in a day (3 weeks' worth of work). I did this on my own time while on my work term with the City of Toronto.

Pages 1-4: visualization pages from the PDF report.

For issues with plagiarism, I won't show my code at this time.  However, I may post what I have for my next project, which is a Python bike-sharing application.  The final project just has to do with Git and source control, so I should be fine.  I was impressed to learn the different options in git log, and my horizons expanded as I honed my skills with GitHub and git.

Kind Regards,
Eugene Chung



by Eugene Chung at Thu May 12 2022 13:46:00 GMT+0000 (Coordinated Universal Time)

Monday, May 9, 2022


David Humphrey

Teaching AWS and Cloud Computing for Programmers

I'm beginning another term today, and the majority of what I'm teaching this time is Cloud Computing using AWS.  I spent a good part of 2021 researching and developing this course, and taught it for the first time during the winter semester.  Now that I've arrived at the "teach it again" stage, I wanted to reflect on it a bit.

We offer a number of courses on cloud computing already, but nothing tailored to developers.  As is so often my motivation, I wanted a course that provided a ready-made path for programmers to take, one which avoided the meandering, haphazard way that I had to learn it.

I decided to begin by asking friends in industry what they thought I should include and avoid.  I reached out to colleagues, former students, and friends working at big companies (FAANG), startups, and in government.  I spoke with people working in media, e-commerce, banking, the energy sector, and social media.  It was fascinating to hear the different perspectives they had, and where they agreed or disagreed.

"What should I teach a junior developer about the cloud?"

Here's some of what I heard:

  • "Everyone uses the cloud." Having cloud experience is really important for being able to go after good jobs in tech.
  • "The cloud is enormous.  You can't teach all of the cloud in a single course.  Your students are going to be overwhelmed".  Everyone is overwhelmed by it.  Focus on breadth over depth.
  • Focus on a single cloud.  Don't bother with multi-cloud
  • "The cloud is primarily Linux."  Make sure they know how to use it.  The cloud is glued together and automated with command-line scripts.
  • "The programming language you choose doesn't matter."  Use node, python, Go, whatever you want, they are all fine, but pick one you already know so you're not learning two things at once (our students know node.js the best, so I use that)
  • "Everything in source control. Period. Always."  Knowing git and GitHub is critical, and also that the entire lifecycle of software changes happens in git (proposal, implementation, testing, deploying).  Force students to work entirely in git/GitHub for everything.
  • "The cloud is cattle, not pets."  As quickly as possible, move them away from thinking about logging into machines to do manual tweaks, and instead think about code and automation
  • A lot of people said some version of "'It works on my computer' isn't useful," "Your code isn't useful if it isn't running in production," or "Cloud is what happens after you write your code."  Everyone said some version of "CI/CD pipelines are critical for a junior dev to understand."
  • "Most cloud workloads are run in containers." Almost everyone told me to focus on containers vs. manually using cloud instances, and to learn how to use them in dev, testing, CI/CD, and production.  "Docker and compose are good choices at this stage"
  • "Kubernetes is really important" and also "By no means should you teach Kubernetes in this course!" since it's too much ("even for industry").  Leave it for later in their journey
  • "Help them understand the cloud's secret sauce: managed services."  Learn how to leverage them in your applications vs. running your own.
  • Security becomes a central concern in the cloud.  Understand the principle of least privilege, the importance of the software supply chain, how to cope with dependencies, etc.  Learn to use tools to help manage the complexity.
  • Similarly, privacy matters more because all your code and data are literally in the cloud now.  Understand the importance of limiting the data you collect/store (what if there's a breach?), and why Personally Identifiable Information (PII) is suddenly a concern in things like log messages.
  • Make sure they know how to manage configuration and secrets properly
  • Use structured logging everywhere and log aggregation/observability tools to deal with things at scale
  • Because "everything is always failing" in the cloud, you have to write your software with different expectations
  • You have to understand the pricing structures of your choices and how to avoid a massive bill.  The paradox of the cloud is: "The cloud is cheap" but "The cloud is expensive."  You can fix things by throwing money at your problems, or you can understand and use better designs. Tagging helps you figure out costs later on.
  • Almost everyone I spoke to de-emphasized serverless, which surprised me--I thought it would be near the top of their list, but no one I spoke to thought it was critical to learn at first.  I've come to the conclusion that it should almost be its own course vs. something I do in this one (maybe it should be the next one I make)
  • Show them how to manage resources manually via the console, but also how to use Infrastructure as Code (IaC) to automate it
  • "Learn AWS" - most people agreed that AWS isn't the easiest option, but is the most valuable to learn.

Course Outline

Based on the feedback I got, I developed a course based on AWS that works through the following major topics:

  • Cloud Computing and AWS
  • Using the AWS Console, CLI, and SDK to manage AWS resources
  • Securing apps with Amazon Cognito User Pools, OAuth2
  • Configuring apps with Environment Variables and Secrets
  • Using git and GitHub to manage source code
  • Using GitHub Actions to create a Continuous Integration (CI) workflow that runs Static Analysis, Unit Testing, and Integration Testing
  • Using and Managing EC2 instances
  • Working with docker, authoring Dockerfiles, docker-compose, and Docker best practices
  • Working with public and private Docker registries to push, pull images (Docker Hub and Elastic Container Registry)
  • Using GitHub Actions to create a Continuous Delivery (CD) workflow (build and push images to registry, automatic deploys)
  • Deploying and running containers on AWS (manually and automatically as part of CD workflows)
  • Running containers in CI/CD for integration testing, and simulating AWS with docker-compose (localstack, dynamodb-local, etc)
  • S3 for object storage
  • DynamoDB for NoSQL
  • Infrastructure as Code and CloudFormation

Along the way, I have them build an HTTP REST API microservice, and slowly integrate more and more pieces of AWS, evolving their approach as they go.  Over 10 labs and 3 assignments, they get to work with nearly a dozen AWS services and maintain a single app for 14 weeks.

AWS Academy Learner Lab

When I started my course development, I decided to target AWS Educate.  It promptly disappeared a few months before I was set to do the first offering ("...everything failing all the time" right?).  I had to quickly pivot to Amazon's new offering, AWS Academy.

The majority of what's offered through AWS Academy is pre-canned, lab-based courses that can be delivered at any academic institution.  I'm not sure who the audience is, because I don't know too many professors who work this way (I always develop and create my own courses).  However, one of the "courses" is called the Learner Lab, and it lets students access AWS resources without a particular course pathway.

To use AWS Academy, an academic institution first has to become a member (luckily, my institution already was).  Then, you have to get "nominated" by an existing AWS Academy Member before you are allowed to create an Educator account.  After this, you have to work through a number of Educator Orientation and On-boarding modules (these took me 1/2 day).

Once you've jumped through the necessary hoops, you can start to create Classes and invite your students to create Student accounts.  You essentially get a Learning Management System on top of AWS.  I didn't use any of its features (we have our own LMS), but you could, and it seemed well made.

What's nice about the Learner Lab is that students don't need to create their own AWS Account and never need to enter a credit card (this is huge).  Upon creating their account, each student is given $100 credits to use during the course.  If they are enrolled in multiple courses, they get $100 per course (i.e., vs. per student).  Free tier spending doesn't get counted against this $100, so it goes pretty far.

A student's credits cannot be increased or renewed.  That's something to be aware of, since any number of things can happen that might mean a student gets locked out of the lab before the course is over.  However, students being students, you also aren't going to wake up to a $10K bill in the middle of the term.  It's a trade-off, but I think it mostly works.

The Learner Lab is essentially a sandboxed AWS account.  You log in to AWS Academy and "Start" the lab environment.  Doing so activates a pre-made AWS Account, which runs for 4 hours before being shut down.  If you need to extend your time, click "Start" again and you get another 4 hours.  While the lab is running, you can use the AWS Console, or other AWS APIs like you normally would.  When the lab is stopped, services like EC2 instances are paused (they get restarted when the lab is restarted).  However, many services still keep working.  For example, S3 buckets, DynamoDB tables, even EC2 instances that are being managed by other services stay up (e.g., Elastic Beanstalk).  It's a little hard to say what is and isn't running when you stop the lab, and therefore what is and isn't costing you credits.

This simplicity is also one of the downsides. Since you have almost zero ability to drill into an account and figure out what is currently running or where your cost is coming from, you only know that you've spent "$23," and that's it.  I had one student come to me in a panic when he noticed he'd suddenly spent $70 in two days.  "What's causing this!?"  Great question!  All of the usual ways you'd figure this out in AWS are not accessible in the Learner Lab, so good luck tracking it down. Thankfully, a professor can connect to a student's AWS work area and look around (also useful for evaluations, where you need to check how things are being used).

The Learner Lab account has access to ~50 AWS services in one of us-east-1 or us-west-2. This includes things like CloudFormation, Cloud9, CloudWatch, EC2, EBS, ELB, Lightsail, Rekognition, S3, SageMaker, RDS, SNS, SQS, etc., which covers a lot.  But it also leaves out some strange things: for example, no IAM, only 1 of the 17 ways to run containers, no way to do API Gateway with Lambda, no Route53, etc.  If what you want to do is available, it generally works great, but some services have extra limitations.

For example, with EC2 you can only run Amazon Linux or Windows AMIs, and the largest instance type you get is r5.large (2 vCPU, 16 GiB RAM).  However, you can run up to 32 vCPUs in parallel, so you can run quite a few instances at once.

The setup works, but it's not perfectly aligned with how most CS departments think about using computing resources.  Most profs I know don't only give isolated labs.  You have project work that builds week to week, and the ability to work with long-lived resources over the term is important.  There was one point in the winter where all of the Learner Lab AWS resources got deleted (I mean for everyone, not just my students!).  The AWS Academy mailing list of other professors around the world came alive as all kinds of people talked about term work being lost and what a disruption it was.  It was pretty clear that people assume you can do term-based work in addition to discrete labs.

I think Amazon imagines a world where you use CloudFormation templates to work with stacks per lab.  That's one way to solve this, but you can't start learning AWS with CloudFormation, at least I don't know how you'd teach it that way.  Students need to work in the console manually for weeks or months before they can be expected to automate everything.

Another thing making this harder than it needs to be is the fact that many third-party IaC or other automation tools are hard to use with the Learner Lab, because your credentials get rotated every time you start/stop the lab environment.  Imagine you need to use AWS credentials in a CI/CD pipeline, but they change every time you do your work.  I found ways around it through careful ordering of topics and adding non-AWS services into the mix, but it felt like an unnecessary limitation.  My requests to Amazon to fix it were met with, "We'll look into it."

Stats

The Learner Lab gives you some limited analytics.  Using these reports, I can see that the average spend per student during the winter was $8.55 (total, for the whole term), and the average lab time was ~120 hours.  Only one student hit $80 (he accidentally reserved a dedicated macOS instance for a few days without understanding what that meant), and another spent 336 hours in the lab.  Time in the lab doesn't cost more per se, but it means resources are running longer.  I think it's great to see people being curious and exploring.

The majority of what we did fit easily within the Free Tier. I was pretty nervous about how what I wanted to do would translate into per-student cost, since a professor can make recommendations (do this, please don't do that), but you never know what your students will do in reality.

I've learned that I could be more aggressive with what we spend and not run out of room.  Even with everything I did, I only managed to spend $20 (the Learner Lab includes a Test Student account, which professors can use to work like a student).  I'll see if this term's numbers match up with this conclusion, then slowly start turning up the volume.

Conclusion

Overall, I'm pleased with the whole thing.  The course prep and research was fascinating, but the development was overwhelming.  I wasn't sure what the students could and couldn't handle, but thankfully my first group proved that the idea will work.

I'm looking forward to updating the labs and projects in the coming terms to add different services, and expose the students to new corners of AWS.  I'm also looking at ways to add AWS and the Learner Lab to other courses I teach.  It's an obvious fit for my open source students, who need access to ephemeral development environments.  I'm not sure if it would be too complicated for my web students.  We'll see.

Wish me luck as I take a new (larger) cohort up the AWS mountain!

by David Humphrey at Mon May 09 2022 19:11:41 GMT+0000 (Coordinated Universal Time)

Sunday, May 1, 2022


Ray Gervais

Exploring Github Copilot During a Hackathon

Or, how to distract yourself with technology while trying to build new technology at breakneck speeds

Building a Golang-powered Back-end

I can remember it like it was just yesterday: I had a long weekend to myself, planned to the brim with music and binging Lego Star Wars: The Skywalker Saga. It was the mental and youthful release that I was looking forward to after doing what felt like two weeks of continuous overtime to meet a deadline. Then, out of nowhere, while chatting with the now CSS-pro @JamesInkster, the following (reenacted) exchange occurred:

James: I want to work on a project again.
Me: Bet. What if we did a hackathon this long weekend?
James: Sounds like a plan.

And thus, my dreams of Lego and music would wait a few more days as we dove into the unknown. Instead, we put together a spec which would define the project, with the requirement that, like a well-done hackathon project, it would be fully working & presentable by the end of the weekend.

After much brainstorming, James came up with the idea of Down 4 Whatever, an application based around the concept of throwaway prompts that could be responded to within an hour of creation. It had potential, and it followed the familiar REST + front-end paradigms, so we knew where to start. James wanted to focus on the CSS & front-end, which I was happy to avoid, so we decided I'd do the back-end, which would serve the front-end in line with the coming-back-into-style paradigm of server-side rendering. I chose Go for my back-end, and though it's not a new technology for me, it allowed me to experiment with design patterns and concepts that I had not yet had a chance to explore.

It was after I had written the database models that I realized it wasn't just me looking at the code. When I went to write the first of the CRUD functions for the models, I noticed a familiar prompt for the following: func (db *Database) AddRoom(room *entities.Room) error {. GitHub Copilot was active! In a moment of both curiosity and laziness, I thought: let's see what Copilot can do with my models if I ask it to write the CRUD functions. In a way, I could tell James would be annoyed, because from a certain point of view I was bypassing many of the coding requirements that I was responsible for. Still, that depends on your point of view, and mine was more interested in seeing what Copilot would suggest. To summarize, Copilot provided the following CRUD solutions (abridged to just the function headers) with very minimal human interaction:

// entities.Room CRUD
func (db *Database) AddRoom(room *entities.Room) error
func (db *Database) UpdateRoom(room *entities.Room)
func (db *Database) GetRoom(id string) *entities.Room
func (db *Database) RemoveRoom(id string)

// This one took me giving a long-winded comment prompt, but Copilot figured it out
func (db *Database) ListExpiredRooms() entities.RoomList

// entities.Prompt CRUD
func (db *Database) AddPrompt(id string, prompt *entities.Prompt) error
func (db *Database) GetPrompts(id string) entities.PromptList
func (db *Database) RemovePrompt(id string, promptId string)

// entities.Comment CRUD
func (db *Database) AddComment(id string, comment *entities.Comment) error
func (db *Database) GetComments(id string) entities.CommentList

For the full source code, you can find the repository at github.com/raygervais/dfw. What I found interesting is that while implementing the CRUD for the sub-resources (Prompt, Comment), Copilot took into account always retrieving the room associated with the id provided in the arguments. Each function which interacts with a single room object has the following at the start:

room := db.GetRoom(id)
if room == nil {
    return errors.New("Room not found")
}

I find it really interesting that the suggestions also took into account my design, which was to have no true database and instead store the items in memory for their lifetime.
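For context, that in-memory design is little more than a struct holding a map behind a mutex; a trimmed-down sketch of the idea (field names here are illustrative, not the exact ones from the repo) looks like this:

// Illustrative in-memory "database": rooms live in a map for the
// lifetime of the process, so there is no external storage at all.
type Database struct {
    mu    sync.Mutex
    rooms map[string]*entities.Room
}

func NewDatabase() *Database {
    return &Database{rooms: make(map[string]*entities.Room)}
}

func (db *Database) GetRoom(id string) *entities.Room {
    db.mu.Lock()
    defer db.mu.Unlock()
    return db.rooms[id]
}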

Feeding the Copilot Ideas

At one point close to the end of our weekend hackathon experiment, I wanted to depart from the current project and just mess with something else, the idea being that the break would help realign and inspire us when we returned to finish the project. James had not really seen or worked with GitHub Copilot before, so I thought it would be awesome to show him why it was so hyped during its beta release. Yet we didn't arrive at messing with it organically. Instead, we were doing leet-code styled questions purely to compare approaches to the solution. Upon providing my solution in JavaScript below (and not accounting for run-time / space complexity), I wondered if I could use it as a prompt for Copilot.

function rounds_won(scores) {
  let playerOneScore = scores.filter((score) => score === 1).length;
  let playerTwoScore = scores.filter((score) => score === 2).length;

  return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

So, given the algorithm above, which determines from an array input (e.g., [1,2,1,1]) which player won more games, could Copilot recreate the function's logic in various languages with no assistance aside from the language prompt? I stumbled upon this experiment purely by accident when I noticed that, below my function, Copilot would suggest Language: {language} if I provided // {language} on the previous line:

// C#

So, I figured let’s see what it would do with the language prompts.

C#

// C#
// Language: C#
public int RoundsWon(int[] scores)
{
    int playerOneScore = scores.Count(score => score == 1);
    int playerTwoScore = scores.Count(score => score == 2);

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

I found it really intriguing that, unlike my JavaScript implementation, the C# version used .NET naming conventions such as RoundsWon. In similar fashion, below the function I was provided a prompt for another language. Here's a sample of what Copilot prompted and provided:

C++

// C++
// Language: C++
int rounds_won(vector<int> scores) {
  int playerOneScore = 0;
  int playerTwoScore = 0;

  for (int i = 0; i < scores.size(); i++) {
    if (scores[i] == 1) {
      playerOneScore++;
    } else if (scores[i] == 2) {
      playerTwoScore++;
    }
  }

  return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

Quite interesting that instead of using a vector iterator by default, Copilot suggested using for loops for languages where they're more commonly found.

Python

# Python
# Language: Python
def rounds_won(scores):
    playerOneScore = scores.count(1)
    playerTwoScore = scores.count(2)

    return playerOneScore > playerTwoScore and playerOneScore or playerTwoScore

For Python, .count(x) was leveraged instead of a for loop or filter. Quite efficient!

Ruby

# Ruby
# Language: Ruby
def rounds_won(scores)
  playerOneScore = scores.count(1)
  playerTwoScore = scores.count(2)

  return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore

Almost indistinguishable from the Python version, but notice that it's not a direct copy-paste: here, Copilot has dropped Python's colons and is using a ternary ?: statement instead of Python's and/or trick.

Haskell

-- Haskell
-- Language: Haskell
rounds_won :: [Int] -> Int
rounds_won scores =
  let playerOneScore = length $ filter (== 1) scores
      playerTwoScore = length $ filter (== 2) scores
  in if playerOneScore > playerTwoScore then playerOneScore else playerTwoScore

It was around here that I started to clue in that Copilot is also aware of different language conventions such as the character(s) which define the starting of a comment.

Clojure

;; Clojure
;; Language: Clojure
(defn rounds-won [scores]
  (let [playerOneScore (count scores 1)
        playerTwoScore (count scores 2)]
    (if (> playerOneScore playerTwoScore)
      playerOneScore
      playerTwoScore)))

Swift

// Swift
// Language: Swift
func roundsWon(scores: [Int]) -> Int {
    let playerOneScore = scores.filter({ $0 == 1 }).count
    let playerTwoScore = scores.filter({ $0 == 2 }).count

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore
}

Nice! It even got the quirks of Swift's $0 closure shorthand. Truthfully, this one wasn't provided as a prompt until I provided // Swift at the start; then Copilot was in sync with my ideas.

Go

// Go
// Language: Go
func roundsWon(scores []int) int {
    var playerOneScore = 0
    var playerTwoScore = 0

    for i := 0; i < len(scores); i++ {
        if scores[i] == 1 {
            playerOneScore++
        } else if scores[i] == 2 {
            playerTwoScore++
        }
    }

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore
}

Objective-C

I figured, let’s throw it a curve ball.

// Language: Objective C
- (int)roundsWon:(NSArray *)scores {
    NSInteger playerOneScore = [scores filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"self == 1"]].count;
    NSInteger playerTwoScore = [scores filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"self == 2"]].count;

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

Mmmm yes. Clearly valid Objective-C with the amount of brackets found in a three line function.

Java

// Java
// Language: Java
public int roundsWon(int[] scores) {
    int playerOneScore = 0;
    int playerTwoScore = 0;

    for (int i = 0; i < scores.length; i++) {
        if (scores[i] == 1) {
            playerOneScore++;
        }

        if (scores[i] == 2) {
            playerTwoScore++;
        }

    }

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

Another language where I'm surprised Copilot went with the tried-and-true for loop vs. iterators or other functional methods.

V

// V
// Language: V
func roundsWon(scores: [Int]) -> Int {
    var playerOneScore = 0
    var playerTwoScore = 0

    for score in scores {
        if score == 1 {
            playerOneScore++
        } else if score == 2 {
            playerTwoScore++
        }
    }

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore
}

PHP

// PHP
// Language: PHP
function roundsWon($scores) {
    $playerOneScore = count($scores, function($score) {
        return $score == 1;
    }

    $playerTwoScore = count($scores, function($score) {
        return $score == 2;
    }

    return $playerOneScore > $playerTwoScore ? $playerOneScore : $playerTwoScore;
}

If I had had this 6+ years ago, I might have enjoyed my internship writing PHP every day.

Rust

// Language: Rust using functional paradigms
fn rounds_won(scores: &[i32]) -> i32 {
    let playerOneScore = scores.iter().filter(|&score| score == 1).count();
    let playerTwoScore = scores.iter().filter(|&score| score == 2).count();

    return playerOneScore > playerTwoScore ? playerOneScore : playerTwoScore;
}

Closing Thoughts

Let me preface first that aside from my original solution, I did not test any of the prompts and solutions provided by Copilot. Instead, I used the overall solutions as a thought experiment to compare what the AI was suggesting for each language vs how I’d do it in each language.

  • The original implementation of the Rust solution leveraged for loops, but I found it really interesting that providing the prompt // rust solution using functional paradigms produced a vastly different solution which used the filter() function.
  • I’m absolutely dumbfounded to think of the amount of training data which went into Github Copilot to the point where it’s learned the idioms of specific languages (such as C#’s naming semantics vs the original solution, the use of := in the go solution, etc), but can also see where more training is needed given that each variable per-language still uses the same variable naming-scheme of my original solution.
  • Code reviews between junior and senior developers are going to become quite the gray area, because I can already tell how easily some will use the exact prompts provided by Copilot as their final solution. I understand the appeal, but I fear that it’ll incentivize a generation of developers who don’t understand the codebase they are writing.

Resources

by Ray Gervais at Sun May 01 2022 00:00:00 GMT+0000 (Coordinated Universal Time)

Wednesday, April 27, 2022


Joel Azwar

OSD700 Afterthoughts

Note: posted a little late but I wanted to wrap up the OSD700 conclusion

So we've finished the Winter 2022 semester, and did I ever set myself up for a tough one. I have to admit I underestimated what it would take to keep up, and with a lot of things piled up on top of school, it was a little hectic to say the least. Nevertheless, I still want to wrap up and conclude my OSD700 journey.

Release 3.0.1 .... and .2, .3, and

so on and so forth. Safe to say we haven't 100% "successfully" shipped the release, but it's up and running okay on production... at least it looks like it. Our Supabase migration didn't quite turn out as well as hoped, so we've got a major bug on prod right now and our feeds won't update with new posts.

Tue really deserves credit for doing the majority of the debugging and releasing, and Duke also has been trying to come up with fixes for whatever's going wrong in prod. But right now I think everyone is trying to catch a break; the semester's just ended, everyone had a pretty heavy last 2 weeks, and I'm sure some of us are slowly but surely monitoring Telescope to eventually get this feature going right.

I hope that eventually we find that fix and get Supabase up and running on prod. That was one of Dave's last wishes coming up to the 3.0 release.

OSD700

So we've concluded the OSD700 course and I want to wrap it up and share my thoughts on the whole journey.

Telescope

Telescope, when I started working on it again at the beginning of this semester, was a far different web app than the one I had worked on 2 years ago in OSD600. It had all these new microservices, a new front end, and Satellite as well; it was overwhelming at first, but eventually I grew familiar with Telescope and was able to contribute, all the while learning new technologies at the same time.

It was great learning even more about Docker, especially considering that contributing to Telescope 2 years ago in OSD600 was what taught me Docker in the first place. I had never worked with microservices before, so jumping into them in Telescope was such a great opportunity. Tools like pnpm were also really neat, since there's an insane amount of dependencies in our Telescope monorepo. Then there were the new technologies that the team has been implementing towards 3.0. There's Supabase of course, which is just perfect for this course since it's open source!

It's always nice working on Telescope because it's a guarantee you'll come out of it having learned something new. Something that was also new to me this semester was sheriffing.

Project Managing

Sheriffing was something I hadn't done before: taking the lead in a team of really talented software developers and having a go at the wheel. At first, you essentially keep everyone in check, know a little bit about everything, and help steer the direction of the next release. But nearing 3.0, Dave wanted us to prioritize different features than in the previous releases; he emphasized wanting new code, so the sheriffs from that week onward had to help steer Telescope in that direction.

It's funny seeing everyone take turns getting out of their comfort zone, and seeing the different approaches to engaging with everyone and getting responses out of them. For example, Alex would really take charge in her weeks as sheriff. Although she might not have lots of knowledge of the different areas of Telescope, she leverages her organizational skills to present the team with the issues she's deduced are high priority. She's also been pushing the use of GitHub Projects, which is a kanban board. Meanwhile, someone like Jerry, who is a great programmer and has pretty decent knowledge of most areas of Telescope, finds it easier in triages to talk about different issues.

I enjoyed my weeks as sheriff; it helped me gain some insight into what PMing is like: leading an hour-and-a-half-long meeting (lots of talking), getting familiar with the different areas of the project, and really taking ownership of it. I've also got to hand it to my co-sheriffs, Kevin and Roxanne; they really carried the sheriff responsibility in those weeks.

Blogging

I lacked in this area a whole lot, and it was because I wasn't motivated and didn't have any inspiration to blog about the progress I had made on Telescope those weeks. And admittedly, it's because I didn't spend enough time working on Telescope to have anything of value to write about. My courses this semester were heavy and I had mismanaged my time. It's either that, or maybe I was procrastinating and making excuses. Regardless, I still feel like I could've put a lot more effort into working on Telescope to contribute more significantly.

Despite that, I still want to say all I can about my favourite things about this course and not what I regret.

Conclusion

The too-long-didn't-read version is: OSD700 is an amazing course, and I'm sure most if not all of my peers agree with this sentiment. Is it an easy course? Definitely not, but is it hard? That really depends on how much effort you put into it. There are no "assignments" or "marks", and it doesn't really feel like a class; it feels more like a team of software developers who need to come together and all bear responsibility for driving a project forward. It's a course that will give back as much as you work for it. But I think most importantly, it gives you a valuable opportunity to work on an open-source project (Telescope in this case), with new technologies to learn and a chance to contribute to something.

If I could redo this course I definitely would. I really enjoyed working with this semester's team of developers; they were all really talented programmers and always seemed to know much more than me. But Telescope is an open-source project after all, so I'm hoping I can just come back from time to time to see where it's at and have a chance to work on it again.

Lastly, shout out to David Humphrey! An amazing professor and an even greater mentor, the mastermind behind the course and the real Sheriff of Telescope. If you go to Seneca and have the opportunity to take a course of his (assuming it's related to computing), you definitely should, and don't take it for granted :)

Thanks for reading

by Joel Azwar at Wed Apr 27 2022 14:49:00 GMT+0000 (Coordinated Universal Time)

Tuesday, April 26, 2022


James Inkster

Oh Hello, I’ve always hated CSS.

Hello Open-Source Community.

I haven’t been doing much recently in terms of open-source. I ran into a bit of a pickle. I’ve always disliked HTML and CSS in particular, I never truly took the time to understand. Flexbox, CSS Grids, or Column/Row layouts.

So all my sites would end up having some sort of weird formatting as I progressed, and I'd have no idea whether it was the parent element or the child element creating the issue. I'd ask myself questions like "Am I building this to scale properly?". I'd get extremely frustrated, and eventually want to ditch a project solely because I didn't understand CSS and its mechanics.

This is pretty accurate for me.



After about 3 failed attempts at building a portfolio... where the components would always work individually, but when put together were completely broken, looking more like a bad rendition of Picasso... I knew I had to grind through learning layouts, coding strategies, and cool techniques that would help me organize the "code" of the CSS.

I ended up taking a Udemy course on it (far cheaper), and such courses usually go loosely over the principles, give you a basic understanding, and then show different implementations. One of the first things they go over is actually how to organize your CSS files. In particular, this course was using SCSS, so they implemented the 7-1 style.

In particular, I find this interesting and neat, because once you understand the idea behind it, it's very easy and accessible to figure out where your CSS issue lies, if there is an issue.
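For reference, the heart of that structure is one entry file pulling a partial in from each of the seven folders; a minimal sketch (the folder and partial names below are the conventional ones, not necessarily the course's exact layout) looks like this:

// main.scss: one import per folder makes it obvious where a rule lives
@import 'abstracts/variables';
@import 'base/typography';
@import 'layout/header';
@import 'components/button';
@import 'pages/home';
@import 'themes/default';
@import 'vendors/normalize';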

Okay cool, so this is a good file structure, and it lets you figure out relatively easily which file the CSS you're looking for resides in. But... what about making your HTML classes easy to remember, easy to locate, and easy to re-use if need be?
That’s where BEM comes into play. Block Element Modifier can easily let a developer know what part of the code you working on, where it resides, and what kind of element you are working on. This is fantastic for re-usability, quick changes, and even organizing the thought in your head to which code you need to change. I’m someone who has always struggled with being okay with trash code submitted. I want my code to be perfect. I want it to scale accordingly, and I like the idea of someone else coming in and being able to read it cleanly. (Keep in mind this is the opposite of my writing, I almost never re-read my work before submitting it) .

Shame on me.

However, code is different for me; it's great reading a proper ternary return statement in a function. Makes my little heart melt. This is why CSS, I think, has always been so hard for me: it's weird writing functionless programming. I want to write neat little blocks of code and return something.

Okay, so what was the reason for the blog post? Well, my friend @RayGervais and I decided to do our own little hackathon last weekend. We set out with a small task: utilize different technologies, see what we could come up with quickly and easily, and get it to a point where it's in a "decent" working condition.

We threw around some ideas, and ultimately we decided on a quick application that could be used quickly and easily and is a throwaway. Ray wanted to use a Go back end, and I just legitimately wanted to test my newfound CSS skills.

We decided on a small project that could expand further. Currently, everywhere expects you to sign up; they get access to your data, and privacy and anonymity get thrown out the window. So picture yourself in front of 100 people, and you have a question you want to ask everyone and get their honest feedback on. That's why we created "Down 4 Whatever".

Essentially, you ask a question and get a 4-letter code, and your people just need that 4-letter code to see and answer your question. It runs off principles similar to Jackbox games. After x amount of time, the room closes, and you'll be able to see all the responses from everyone. No accounts, no waiting, and quick and easy for everyone to use. Simplicity was our target here. You can check out the repo here.

https://github.com/raygervais/dfw

Ray took care of the back end by utilizing Go and GitHub Copilot. Pretty impressive, and I'm sure he'll write a blog post about that experience.

I took care of the CSS and the HTML. We didn't feel we needed to utilize a newer framework, as it would be overkill for this sort of project. What I did install for development purposes was live-server, which reloads the page during development when you update the CSS or HTML file. A couple of other dev dependencies to convert my SCSS to a CSS file, and I was good to go.

I tried my best to follow what I have learned, and one of the challenges I set for myself was to try and create a website that did not need separate CSS for a mobile version. This was a difficult task, because it's all about making sure any element I use can scale relatively well. There are a couple of minor mistakes, particularly with input boxes (I still need to wrap them in a div so that the input CSS will adjust accordingly wherever you use it on our site).
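That wrapper fix amounts to something like the following sketch (class names are placeholders, not what's in the repo): the wrapper owns the sizing, and the input simply fills it, so the same input styles scale wherever they're dropped in.

// The wrapper decides how wide the field may grow
.form__field {
  width: 100%;
  max-width: 30rem;
}

// The input just fills whatever the wrapper gives it
.form__field input {
  width: 100%;
  box-sizing: border-box;
}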

For the most part, I was happy with what we achieved in under 12 hours. That includes brainstorming, coding, discussing possible options, and even figuring out how to merge our two sections cleanly. There are some fixes we still need to make on the result page, and there are some minor CSS bugs. But for such a short time, and for my first time actually writing CSS with my newfound abilities, I think this is a win.



Also, please don’t judge the colour gradient; there was a lot of debate, and nobody was happy no matter what we chose. I almost think I should make it random. Maybe gradients just aren’t the “in” thing at the moment.

Overall, check out the site and check out the repo. And if you are like me and hate CSS, learning those principles and writing structured code made it that much more enjoyable. You’ll have something to build off of and not feel flustered when your CSS breaks everything.

by James Inkster at Tue Apr 26 2022 21:26:15 GMT+0000 (Coordinated Universal Time)

Sunday, April 24, 2022


Gerardo Enrique Arriaga Rendon

Goodbye, OSD700

And so the curtains fall on.

With this last post, I will officially say goodbye to OSD700, a course full of discoveries and learning.

Let's talk about OSD700

I have mentioned it several times before, but I would like to mention it again so that the post is self-contained: OSD700 is a course option offered at the college I am studying at, Seneca College. Despite it being a course that counts toward graduation, we almost never discussed marks and how I would be graded. The professor in charge of the course stated one thing clearly: "as long as you are contributing and you are showing the effort, you will pass this. Do your best."

The course description does not do justice to the responsibilities that you are given. OSD700 is described as a course where you will have to maintain an open-source project, but there's an important aspect here that is never mentioned: what does it mean to maintain an open-source project?

Open-source projects and their challenges

Some people may never be interested in maintaining an open-source project, due to the self-sacrifice required. Since it is an open-source project, you should not expect to get paid at all. Maybe you could be lucky and open a successful donation campaign, so that you can give full-time attention to the project. In other situations, however, the project is a side thing, since you have to focus on your full-time job, right?

In my case, I was able to focus on this course like a full-time job, since I had a light courseload. However, it doesn't matter if you are a full-time or part-time maintainer, the challenges are still the same; they just have to be prioritized quite differently.

What are those challenges?

For starters, you have to prioritize what you want to focus on at a given time. Assuming you are a full-time maintainer, the ideal is that you can give your best in all areas, but that's just an idealization. Most of the time, you have to give up more work than you thought.

Maybe you had an idea you wanted to implement, so you, all excited, start implementing a prototype. You notice that realizing your vision will take more time, so what do you do? If you fall for the sunk-cost fallacy, you might think it is better to keep developing until you reach that vision. However, all the time spent on developing that feature is time that could have gone to bug-fixing, paying off technical debt, finding bugs to file, or any other tasks that would still improve the overall health of your repository. So, in the end, you swallow your pride and say: "it is time to give up on this." It may sound somewhat defeatist, but I think acknowledging that other things have to be prioritized is part of what it means to be an open-source maintainer.

Another challenge is not knowing what your end goal is. For a lot of people, not knowing where they will end up after embarking on an adventure can provoke anxiety; the uncertainty of it all keeps you asking, "am I on the right path?"

However, instead of being scared of that adventure for the rest of your life, there are two opposite ways to approach it:

  1. Find a way to set an end goal, and focus on that end goal until you reach it. When you reach it, try to set another end goal.
  2. Let the adventure take you wherever it may lead you, and just enjoy it.

The first one may work for people who already have experience with a particular set of problems and would like to have something finished, while the second one is for people who enjoy the journey more than the treasure at the end.

However, in an open-source project, you may need both: you want to get things done, so that others find your project useful, but you also would like to explore and enjoy what you learn along the way, since that will help stimulate your creativity and develop new ways of solving the problem (and it may help keep your mind from going insane out of boredom).

One more challenge that one may encounter is having to communicate your ideas with your fellow maintainers, if you are on a team. The hope is that you are in a collaborative environment, where everybody is willing to listen to anybody. However, just being willing to listen is not enough. You gotta communicate your ideas, even if you think they are bad or that they don't solve the problem. Why? Well, they help you grow as a developer. If your teammates can justify why a certain solution may not be suitable for a specific problem, then you can use that to your advantage and learn from their way of thinking. Developers can create more robust code by listening to several situations and cases, so I think developers can also build more robust critical and logical thinking by listening to other ways of solving the problem.

A little bit of retrospective

Back when I started this semester, we were supposed to write a blog post about the areas we would like to contribute to the most, and to be in charge of those areas, too.

In my post, I talked about documentation and dependencies. However, throughout the semester, I mainly focused on the dependency visualization project.

At the start, I had this cool idea of a dependency tree that you could navigate through to discover all of the dependencies that Telescope uses, but this idea was just cool; in terms of functionality and usability, it was horrible.

After all, the main purpose of the dependency visualization was to make it easier to find GitHub links for other people, so finding an easy way to navigate through hundreds of dependencies was the most important choice. However, before the front-end arrived, we had to write a service that could provide the information that the front-end would need. The service is not that big, and it actually does very little on its own, so it was a manageable project.

Some opinions regarding the dependency-discovery service

If I have to be honest, I want to improve the dependency-discovery service. I feel that the API can be improved and better defined. Also, there is almost no handling for exceptional cases, so there's that...

In terms of projects, this is probably the first project I was given total freedom on how to implement and how to design. I was just given a set of requirements, the rest was left for me to figure out, which was somewhat difficult to deal with.

Throughout my life I was always given a task to do and told how they wanted it done, and I was able to follow just that. However, in the real world, most people who tell you what they want you to do are speaking from their own area of expertise. Their solution might not be possible to realize, or it might not be a solution at all. This is why they tend to leave certain things vague: they just don't know what to do in a specific case, or they might not know that the specific case actually exists. This is somewhat vexing for computer programs that could accept any kind of input, because essentially, you have what some might consider undefined behaviour.

I am aware that the phrase undefined behaviour has a technical meaning when discussing the safety of C programs, but I would like to take the phrase and view it more literally. When something that could happen in a program actually happens and you are not sure what the program will do, that is what I mean by undefined behaviour. It's behaviour that is never documented nor expected, and so it ends up being undefined. It is not like this behaviour does not exist; it's just hidden, arising from the consequences of your program. This is where a lot of bugs can occur (in fact, all hidden bugs in a program are due to this phenomenon).

I hate that type of undefined behaviour. Why? Because I hate unreliable programs. If computers are fast, why can't they be correct, too? If I am going to type random words in my text editor, I don't want it to crash on me because I accidentally typed too many keys at once. As the user, I don't know how the program behaves, so I expect that, as long as I don't do anything obviously unsafe for the program (like turning off my entire computer during an update), I will be fine with how the program does things. Of course, if the program can prevent bad consequences even in those unlikely situations, even better, but that's not a strict requirement.

However, as a developer, when you are discovering what your program has to do, an important question always lingers in your head, "will I need this in the future?" Some people say yes, some people say no. Either way, the answer to this question cannot be boiled down to a simple yes or no, but instead it is reduced to the conclusion that the developer can make after years of experience, and even after that, that conclusion might turn out wrong.

In terms of my set of experiences, I cannot provide an answer yet.

What's left of the dependency-discovery?

A lot of stuff, actually.

First of all, we gotta improve how the service itself works. Maybe find a way to improve memory usage, since we cannot store so much information at a time, even though we would like to, to save on GitHub calls...

We could improve on the current API so that it is easier to use. For example, the /projects route does not provide pagination, so you will get all names at once, which can be annoying for interfaces implementing pagination on their end.
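As a rough sketch, pagination for that route could look something like this (the handler and query parameter names here are hypothetical, not the current dependency-discovery code):

// Hypothetical paginated /projects route (Express). getDependencyNames() is a
// stand-in for however the service currently collects the dependency list.
const { Router } = require('express');
const { getDependencyNames } = require('../lib/projects');

const router = Router();

router.get('/projects', async (req, res) => {
  const page = Math.max(parseInt(req.query.page, 10) || 0, 0);
  const perPage = Math.min(parseInt(req.query.per_page, 10) || 50, 100);

  const names = await getDependencyNames();
  const start = page * perPage;

  res.json({
    total: names.length,
    page,
    perPage,
    projects: names.slice(start, start + perPage),
  });
});

module.exports = router;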

Another thing that could be done is to research what other functionality might be useful for the service. This might not be necessary, since the service had a single purpose, but if this is an API that other clients could consume, maybe we could try to expand more on what could be possible with this service.

Final words

What's left to say? Probably a lot, but I don't want to make this a 20-minute read, since that'd just be me rambling on and on and on about certain topics related to my experiences here.

I would like to end this post by thanking everybody who participated in the OSD700 course and gave their support to bring Telescope to version 3.0. Best wishes to everybody!

by Gerardo Enrique Arriaga Rendon at Sun Apr 24 2022 05:08:10 GMT+0000 (Coordinated Universal Time)

Saturday, April 23, 2022


Francesco Menghi

Telescope 3.0

Telescope 3.0 is finally out! For the final week of OSD700 I finished two issues that I mentioned in my previous blog post and some other small fixes.

I successfully added the YouTube and Twitch sign-up page and it now looks like this:

This PR was supposed to only add support for Twitch, while Thanh was in charge of adding support for YouTube. However, after Dave's feedback, I ended up adding YouTube to the sign-up instructions, and finally Thanh reviewed and approved my PR.

Many fixes

When something doesn't work and you try to find a solution, the fix often turns out to be something really simple. This is what happened with our Turborepo CI cache implementation.

The reason the CI cache didn't work is that we were missing the team flag in the command. I previously assumed it was not required based on the Turborepo docs, but Felix (the maintainer of the GitHub Action that we're using) came to the rescue and suggested the fix. In the end, this simple PR made everything work!

ESlint change

Our ESLint implementation included TIMING=1 to get a performance overview each time we lint our code. This however felt unnecessary most of the time so I decided to make it a separate command.

Going forward we have two ESLint commands that are run using Turborepo:

  • pnpm lint: runs ESLint in our repo.
  • pnpm lint-time: runs ESLint in our repo with the TIMING=1 option.

Docusaurus fix

Another quick fix I made was removing these two options from Docusaurus:

showLastUpdateAuthor: true,
showLastUpdateTime: true,

These two options allow you to use the git history to retrieve information on when a page was last updated and by whom. Unfortunately, since our Docusaurus app lives inside the monorepo, .git is only found at the root of the monorepo. So, when building the Docusaurus app in a Docker container, there was no git info being copied over, and it resulted in a long list of errors.
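For context, these options typically live in the docs section of the classic preset; a minimal sketch of docusaurus.config.js (not our full config) showing where the removal happens:

// docusaurus.config.js (simplified sketch)
module.exports = {
  title: 'Telescope Docs',
  presets: [
    [
      '@docusaurus/preset-classic',
      {
        docs: {
          // Removed: both options shell out to `git log`, and .git is not
          // available inside the Docker build context for the docs app.
          // showLastUpdateAuthor: true,
          // showLastUpdateTime: true,
        },
      },
    ],
  ],
};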

The end of OSD700

This was the last week of OSD700 and the last week of my studies at Seneca. I cannot believe I have already reached the end. It feels like yesterday that I was learning things like navigating around a terminal or learning the basics of HTML, CSS, and JavaScript.

Both OSD600 and OSD700 were an incredible and unique experience. Last semester I learned how to use git and GitHub and made open source contributions during Hacktoberfest and beyond. This semester I participated in taking the Telescope project from version 2.0 to version 3.0 with an amazing group of people.

Working in open source feels completely different from getting an assignment done and submitting it before a deadline. When you open a Pull Request, your code is there in the open, ready for other people to take apart, find potential problems in, and eventually merge. In my experience, there is often a peer with more experience with a certain tool or technology who can provide help or guidance when you are stuck.

Looking back at my first post of the semester where I wrote some goals for the class, I feel like I have accomplished what I set out to do. I worked a lot on monorepo tooling, and I used Docker, Nginx, React, Docusaurus, and more.

I am excited for the future of Telescope and how the next group of students will influence its direction. I plan to stick around and continue to make contributions to the project!

by Francesco Menghi at Sat Apr 23 2022 22:29:47 GMT+0000 (Coordinated Universal Time)


Diana Belokon

Getting a little bit technical!

After expressing my feelings about the OSD700 course, and all of the experiences that I went through this term, I thought, "what a good way to end the term..." But then I remembered that I have to speak about what I managed to get in for release 3.0!

So, I have one PR that took a while, despite being a single-line change, and I would like to go into extreme detail, because it ends in a really interesting lesson for other people to learn.

Flickering stars

So, back in release 2.9, I developed a simple "star field", which is a simple animation showing the GitHub contributors as profile pictures. I got the initial code and idea thanks to the Coding Train :D

One weird quirk that the star field had is that the profile pictures would "flicker". For some reason, a picture would get small, then appear super big, and then get small again, acting like normal. It is somewhat difficult to describe, but this would give a flickering effect that was very annoying and somewhat horrible to look at...

Adventuring into the solution

We did what a responsible developer would do: file an issue and leave it for later :)

After a while, the issue was picked up by @jerryhue. He mentioned something about rewriting the math logic so that it would fix the flickering issue.

I wasn't super sure what that would mean, because the logic itself was fine, it was just a weird flickering that was happening. I thought to myself, "how is that related to the flickering?"

After a loooooong while, the team decided to assign me the issue in a meeting. After the meeting ended, I asked @jerryhue why he couldn't solve it. He told me that it was difficult for him to rewrite the math logic since he was struggling to think of an easier approach to do the star field simulation.

Rewriting the solution or fixing the current one?

I wanted to know why he was so obsessed with rewriting the logic. He thought that the current logic was fine, but that it could be improved. One reason for this improvement was to make the stars a little more "believable", so the zoom effect would look better.

I understood his reasons, so I decided to improve the code to achieve something like that.

Of course, I didn't manage to do much...

I was back at square one. I was feeling frustrated. "Why can't I solve this?", "why is this fix so difficult?" I was almost going to give up and hand it to someone else; however, I decided to try one last time.

If I was going to debug it properly, I wanted to write it in the p5.js editor, since I just wanted to focus on that specific part, and I didn't want to wait for Next.js to compile every time I made a small change.

When I was trying it out, I didn't want to include the images at first, because I would have had to make an array of image URLs for p5.js to load, and I was feeling a little lazy, so I just went with an array of circles to be drawn on the screen.

The only change was drawing circles instead of GitHub profile images; everything else in the code was the same, and to my surprise, the flickering stopped.

I tried to understand the situation: how come the flickering occurs only with images and not with regular circles? At that moment, I decided to load a single image that could be reused for several stars and understand why this was the case. As expected, the flickering started to happen when I drew images.

I was utterly confused, since the behaviour of an image was clearly different from that of a circle. So, now that I knew it would only happen with images, I had to understand when the flicker would occur.

I wrote some code to make note of the values, and I noticed something very consistent: the flickering wouldn't start at the end, when a star reached its maximum size, but right at the very start, when the size of the star was 0.

When I thought about this, I was still confused: why at that moment? To answer some of my questions, I went to the documentation to read up on the behaviour of the image function, the function that draws images on a canvas. Nothing much was mentioned about what happens when the width and height of an image are 0.

Since nothing was documented, I said to myself, "it wouldn't hurt to try in the live examples available", and so I did. I set the size of the image to 0, and all my doubts started to disappear. If you draw an image to a canvas with dimensions of 0, it uses the original dimensions of the loaded image!

So, that meant that if I wanted to avoid the flickering, I had to stop the size from ever being zero, and instead keep it somewhere close enough. This line came out as a result:

this.z = p5.random(p5.width * 0.1, p5.width * 0.95);

Instead of getting a value that might be zero, I now get a value that is never zero, and thus got rid of the flickering once and for all!
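For anyone curious, here is a minimal, self-contained p5.js sketch (instance mode) of the same idea; the avatar URL and helper names are placeholders, not the real star-field code:

// Minimal illustration: a single "star" whose size is derived from z.
// Because z never reaches the range where the size becomes 0, image()
// never falls back to the picture's full dimensions (the flicker).
const sketch = (p) => {
  let img;
  let z;

  p.preload = () => {
    // Placeholder avatar, standing in for a GitHub profile picture.
    img = p.loadImage('https://avatars.githubusercontent.com/u/1?v=4');
  };

  const resetStar = () => {
    // The fix: spawn z away from the extremes so the size is never 0.
    z = p.random(p.width * 0.1, p.width * 0.95);
  };

  p.setup = () => {
    p.createCanvas(400, 400);
    resetStar();
  };

  p.draw = () => {
    p.background(0);
    z -= 2; // the star moves "closer" each frame
    if (z < p.width * 0.1) resetStar();

    // Size grows as z shrinks; with the clamped spawn range it is never 0.
    const size = p.map(z, 0, p.width, 48, 0);
    p.image(img, p.width / 2, p.height / 2, size, size);
  };
};

new p5(sketch);

In the star field itself, the one-line change above is exactly this guard applied to each star's starting depth.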

Conclusion to this whole adventure

To be honest, this helped me learn a huge lesson. Sometimes it is harder to create a solution with the tools you have than to create your own tools to build your solution; at the same time, though, it is valuable not to have to create everything from scratch or rewrite everything. Even though it would have been nice to rewrite the math logic so that it looks better, I like this solution much better!

by Diana Belokon at Sat Apr 23 2022 17:21:35 GMT+0000 (Coordinated Universal Time)


Hung Nguyen

Final week

Overview

It has been a long run for my second journey into open source. It is time for release 3.0 to land. I am happy that we finally have a mobile app with the posts service, which is the main feature of Telescope. On top of that, the student quotes and Telescope's About screen (which is all about Telescope) have been finished with an integration of Supabase. This contains the really hard work of James, Kevan, and me. I would like to say thanks to all the reviewers who gave us a lot of useful feedback.

Last PR for this semester

This PR is about styling the post component for the posts service timeline.

A header has been added to each post. Now we can see that it contains the name of the author, and the title is now more similar to the original post in the Telescope web app. Not only that, if you click on the author's name, the app will redirect to the author's blog timeline, where you can find other blogs by that author. Based on some feedback, I still have to fix the text font and padding for the page.

Final thought

This semester was great; I had more chances to contribute to the open source community, especially Telescope. I also had the chance to work with React Native, which was really new to me. Although it is the end of the semester, the app still needs some improvements, so if I have some time, I will definitely keep working on it.

by Hung Nguyen at Sat Apr 23 2022 17:11:37 GMT+0000 (Coordinated Universal Time)


Gerardo Enrique Arriaga Rendon

Before the finale...

Although I should have written this post a week ago, I'm glad I am writing it now, since I feel that I can properly divide the topics I would like to talk about between this and the next blog post.

For the people that lack context: OSD700 is a course option given to students taking certain programs in Seneca College. As part of the course work, you are supposed to contribute to this open source project that the school gives support to: Telescope.

I would like to talk more about this, but I want to leave it for the next post, as it will be the final one that I write on Telescope directly. That does not mean I will stop writing about Telescope; rather, I will stop writing about it in the context of OSD700. I may approach talking about Telescope in different ways, and experiment a little bit more!

Either way, this post and the next one are opposite sides of the same coin, that coin representing the finale of this "emotional" character development arc of mine (not really). In the last post, I would like to do a recap of my adventure in this course, what I hope I have learned, what I managed to contribute to Telescope as a whole, and my aspirations going forward.

However, we gotta talk about what we are going to ship in Telescope 3.0, right? Since old habits die hard, we are still going to talk about the PRs that I managed to contribute for 3.0, as well as what else went into the release. Also, it is not like this release is going to be the last one for Telescope; there's still plenty of work to be done!

So, what's up with the release?!

Well, this release was wild! It is kind of unfortunate that we couldn't have a calm release for the final one (the alpha release was much calmer, in fact...). There are a couple of problems that are going to be addressed throughout the weekend, because the team was starting to feel tired after a long meeting session where we prepared the remaining PRs for merging.

What did you manage to submit for 3.0.0?

Most of the PRs I did for this release were small, since I was taking a step back to focus on other courses that I had to pay attention to.

The most remarkable one would be moving the star field element that @dbelokon worked on in #3149 to Docusaurus. This one was fairly straightforward to do, since I had done something similar in the past (throwback to what I had to do related to WebAssembly!). I did not add any new code; instead I adapted it to Docusaurus. I had to follow up with a few fixes, since the original PR was missing something that nobody noticed until it was time to build and deploy the docs.

You can also count the documentation for the beloved dependency-discovery service, which describes the API of the service in a more detailed manner.

And that's pretty much it. I did work on other PRs, but they were small fixes to things I had to sort out so I could get on with other tasks.

by Gerardo Enrique Arriaga Rendon at Sat Apr 23 2022 09:47:18 GMT+0000 (Coordinated Universal Time)

Telescope 3.0.0-alpha

This post should have been posted two weeks ago, but I didn't write it, so let's imagine that I wrote it two weeks ago :)

And yes, this post is a follow-up of the other I posted...

So, release 3.0.0-alpha has happened, and with it, we are nearing our final release: release 3.0.0.

I can't help but feel a little bit nostalgic when looking at the first release I worked on, release 2.5. Even though it has been only four months, I felt that much more has occurred.

Either way, what did I manage to contribute for this release? Even though I was not as active as in previous releases, I managed to finish the tasks that I described in my previous blog post.

Doing backups: not as difficult as it sounds

I'm glad we chose Postgres as a database. Not only because it's open-source, free, and well integrated with Supabase, but also because it has easy-to-use client programs for backup creation and restoration.

I am not an expert on backups, so I don't know much about backing up databases, but in my experience backing up my personal data, I always found it somewhat unreliable to have to use a third-party program, since the Windows built-in tools are not good enough. I am glad I was proven wrong about how difficult this would be.

However, there was an important thing I had to take into account. All of our important services are deployed as Docker containers, so using localhost:5432 to refer to the database is not going to work. The original idea was to create a script and run it on the host computer that runs the containers. However, @humphd pointed out that this was not going to work, and that we had to move the script into its own container that accessed the database container through the Docker network.

So, after reviewing how to write Dockerfiles, the next step was to figure out how to run the script. The main idea is that the script is run as a cron job at a specific time inside the container. I was lucky enough to find a blog post that explained just what I needed. I had to place the script in a folder of the container's file system so that it would run at 2 o'clock in the morning.

That covered creating backups, however. I also had to write a utility script that would restore the database using the backup generated by my script. Again, thanks to the wonderful client programs offered by the Postgres team, this was a cakewalk.

The major difference between the restoration script and the script that creates the backups is that the restoration script does not have to run periodically, so I just included it inside the container. That way, a system administrator can connect to the container and run the script inside it. With Portainer available, this task becomes fairly straightforward and accessible. If you want to check the PR, here it is for your delight.

dependency-discovery, are you tested?

So, after having a crash course on unit tests for the nth time, and some reading of the Jest documentation, I wrote the tests for the /projects and /github routes.

The main problem I have when writing unit tests is that I don't know how much a "unit" is. Some websites say that a unit can be a function, while others say that a unit is a class, and others say that an entire module is a unit! With so many different definitions, it is hard to choose a source of truth.

Instead of worrying about the actual definition of a unit test, I had to understand the reasoning behind it. What makes a unit test different from an integration test or an end-to-end test? Unit tests tend to be small, so they are fast to run all at once. They tend to have few points of failure. They also tend not to depend directly on anything that could influence the result of the test, among other things.

So, in this case, I had to understand something about these tests. Here, we are testing just the routes and their responses, which means we don't care about how the modules that the routes depend on do their work; we just care about what they give us in return. We will assume that they work (although in some cases, they might not), so that we can focus on the defects in our specific code, instead of the whole system at once. This brings up the important concept of mocks.

When I started reading the Jest documentation, it mentioned how to use mocks and the like, but I failed to understand why you would want to mock your own code. Well, the lesson was: it does not matter if the dependencies your code has are also another part of the project, they should be treated as a third-party library that will always work when the tests start running. This helped me figure out how to write the mocks I needed for the unit tests, and thus helped me write the tests themselves, too.
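To give an idea of the shape this takes, here is a rough sketch of a route test that mocks its dependency; the module paths and function names are placeholders, not the actual Telescope test code:

// Rough sketch: unit-testing a /projects route while mocking the module it
// depends on. Paths and names are hypothetical.
const request = require('supertest');

jest.mock('../src/projects', () => ({
  getProjects: jest.fn(),
}));
const { getProjects } = require('../src/projects');
const app = require('../src/app'); // the Express app that mounts the route

describe('GET /projects', () => {
  afterEach(() => jest.clearAllMocks());

  it('returns the dependency names provided by the projects module', async () => {
    getProjects.mockResolvedValue(['react', '@babel/core']);

    const res = await request(app).get('/projects');

    expect(res.status).toBe(200);
    expect(res.body).toEqual(['react', '@babel/core']);
    // The route is the unit: we only assert it used its dependency,
    // not how that dependency produces the list.
    expect(getProjects).toHaveBeenCalledTimes(1);
  });
});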

And with that, the work for release 3.0.0-alpha has been finalized. Now, onto the final step, release 3.0.0!

by Gerardo Enrique Arriaga Rendon at Sat Apr 23 2022 02:49:41 GMT+0000 (Coordinated Universal Time)


Roxanne Lee

Release 3.0 - OSD700 Recap

This blog is not really technical.

So, the release happened, and not during a final either, so I got to witness it. It's not perfect yet, but fixes will happen, and that's not really what this blog is about.

Take OSD700

To get straight to the point: if you, dear reader, happened to stumble here because you're still trying to decide whether or not to take this course, the short answer is "just take it".

Here's my metaphor for OSD700. You come in and you're thrown together with a bunch of people, but no worries, you'll get to know them soon enough because these are your co-parents for the next few months. "Telescope", or whichever project your class gets, is now your newly adopted child: imperfect, but beautiful in its own way. It's your job to take care of it, and your responsibility to make sure it stays alive and healthy. Obviously, as a kid, Telescope is very fragile and needs a lot of attention, and if you ever feed it anything bad, be prepared for all the tantrums that follow.

This course is like no other. All throughout the semester there is this sense of comradeship. You're all in this together; you don't have to worry about keeping your code away from one another, and it's more about sharing your research and experiences with each other. This gets things done more efficiently, and you never have to feel like you're alone.

Fair warning though. A team can only carry you so far. You, yourself, will have to break out of your comfort zone and get past that initial lack of confidence and the awkwardness of reaching out for help. And, even though there might always be some people who are seemingly on a whole other level, in the end, all that matters is how much you've learned and grown.

My Change

At the beginning of the semester I was utterly clueless about what "Telescope" was, and immensely frightened. Before the grades came out I thought I had flunked OSD600, and coming back for more in 700 felt like going for something way out of my league. Not to mention all the confusion and all the questions in my head. Like, what in the world was "Docker", or "Nginx", or "Redis", or "Traefik", or "Elasticsearch"? What about all these microservices: what do they do, and how do they even work with each other? And, whoa, there's also something called a "Satellite"?

But now, after 14 weeks of experimenting and sorting out the threads, I've come out a whole new person. I know the ins and outs of the Search service, made friends with Jest, and became intimate with Elasticsearch on a level I would never have imagined. Along the way I became familiar with the nuts and bolts of our back-end (the Parser), and got comfortable working with tools like Docker, Nginx, Redis, and Traefik. I even poked around a bit in Satellite, and had a taste of working with the front-end. There was also that time when I was Sheriff... I might not have enjoyed it as much as others did, but it was a different experience and it kept me on my toes. Not to mention the large number and variety of PRs that I had a chance to look at and learn from.

Undoubtedly, I was lucky to get to work with a huge group of talented people. You can read all their blogs on Telescope itself, or you can check them out selectively on the repo wiki. We also got a lot of support from alumni, and of course the Prof (humphd) himself. To this day I am still amazed at how he can keep on top of things with so much going on.

Conclusion

For me, working on Telescope was a roller coaster of emotions. Sometimes it's fun, and the next second, torturous. Finishing the code could be exhilarating, but not breaking through could be agonizing. Sometimes I'd look forward to the next issues to work on, but there were also times when I wanted to pretend the project didn't exist. Undoubtedly, the scale definitely tips more towards the positive side. Plus, the amount of fluidity and uncertainty in open source projects like these is just another aspect of the uniqueness of this course.

I'll probably stick around, especially for Elasticsearch-related things, but I might try to dip my toes into other areas without the pressure of trying to get things done for the next release.

All in all, it's the best course that Seneca could ever offer, and I'm glad I stuck with it. Even for you, dear reader, student or not, Telescope is always there with open arms. Consider contributing today!

by Roxanne Lee at Sat Apr 23 2022 07:02:29 GMT+0000 (Coordinated Universal Time)


Tue Nguyen

3.0 release, work is not yet done

I worked exclusively on one issue this week: #3464.

Update Feed in parser to use Supabase:

I basically removed all the invalid and flagged feed handling from src/api/parser/utils/storage and created functions that take care of invalidating/flagging feeds using Supabase in src/api/parser/utils/supabase.js. Looking back, it wasn't that hard, but it was quite lengthy.

The difficulty was that the addFeed function below uses the isFlagged function to check for flagged feeds, and isFlagged now talks to Supabase instead of Redis, which made the unit tests fail without mocking it.

  addFeed: async (feed) => {
    // Check if feed being added already exists in flagged feeds set
    // If it is, do nothing
    if (await isFlagged(feed.id)) return;

    const key = createFeedKey(feed.id);
    await redis
      .multi()
      .hset(
        key,
        'id',
    // ...
  },

So I created a mock for the Supabase module, though I had a problem telling Jest to use the mock module.

Originally, I imported the mock this way

jest.mock('../src/utils/supabase');
const { __setMockFeeds, __resetMockFeeds } = require('../src/__mocks__/utils/supabase');

So, __setMockFeeds would add the array to the manual mock file I required directly, not to the mocked module that Jest actually substitutes in. Those two are different JavaScript objects, and Jerry helped me understand this.

// Correct way to mock
jest.mock('../src/utils/supabase');
const { __setMockFeeds, __resetMockFeeds } = require('../src/utils/supabase');
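For reference, the manual mock itself can be as simple as an in-memory store; this is only a sketch of the shape, with placeholder function names (the real mock in Telescope has more to it):

// Sketch of a manual mock for the supabase module. Tests seed it with
// __setMockFeeds and clear it again afterwards.
let mockFeeds = [];

module.exports = {
  // Test-only helpers, conventionally prefixed with __
  __setMockFeeds: (feeds) => {
    mockFeeds = [...feeds];
  },
  __resetMockFeeds: () => {
    mockFeeds = [];
  },

  // Mocked versions of the real module's API used by the parser
  isFlagged: jest.fn(async (id) => mockFeeds.some((feed) => feed.id === id && feed.flagged)),
  getAllFeeds: jest.fn(async () => mockFeeds),
};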

I also had to rework the parser e2e test; now it looks even simpler. For some reason, I kept forgetting to run pnpm migrate and was stuck on the test for a while. I managed to make it work a few hours before the 3.0 release, haha.

However, the work is not yet done: Supabase doesn't really work in production yet, and the parser can't talk to the database. I hope to fix this soon and earn myself some peace of mind.

by Tue Nguyen at Sat Apr 23 2022 04:15:41 GMT+0000 (Coordinated Universal Time)

Pre 3.0

The closer we get to release 3.0, the more we try to narrow the scope of work, and honestly I like how each of us knows what area of focus to take charge of and trims down the excess issues.

My tasks for this release are again mostly on the parser service:

  • Remove the src/backend
  • Update invalid/flag feed functions to use Supabase
  • Update docs after removing src/backend

But let's focus on what I did in the week prior to the 3.0 release.

Removing the src/backend:

One of the problems I came across was that we passed our staging SUPABASE_URL and development SERVICE_ROLE_KEY to the parser; the pair would never work, and we'd never get the staging SERVICE_ROLE_KEY as it is a secret. I had to change the pair so that the client could work locally, which created another challenge.
The reason we used the staging URL was Duke's suggestion: he wanted to use the feeds from the staging database instead of seeding feeds every time a developer spins up the services locally. Duke quickly wrote a solution to this in #3541, saving us from lengthy setup steps.
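Roughly speaking, the parser builds its Supabase client from this pair of values, so the URL and key have to belong to the same project; a small sketch (the actual file and variable names in the parser may differ):

// Sketch of how a Supabase client is built from the env pair.
const { createClient } = require('@supabase/supabase-js');

const { SUPABASE_URL, SERVICE_ROLE_KEY } = process.env;

// A staging SUPABASE_URL combined with a development SERVICE_ROLE_KEY is the
// mismatch described above: the key is not valid for that project, so the
// client's requests are rejected.
const supabase = createClient(SUPABASE_URL, SERVICE_ROLE_KEY);

module.exports = supabase;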

This wasn't an easy task; I had to do a lot of detective work to find out which microservices or modules used the legacy backend. Luckily, there weren't many. Removing dependencies was mundane too; I had to figure out which dependencies were peer dependencies in the root package.json.

Deploying the code to production made me a bit anxious, and it did break production: I had not considered the nginx proxy config for the legacy backend, so nginx kept breaking. See the locations removed in #3550.

by Tue Nguyen at Sat Apr 23 2022 02:55:43 GMT+0000 (Coordinated Universal Time)


Kevan Yang

Working with Next.js + MUI

Intro

This week I worked on the front-end part of Telescope. I haven't touched the front-end in Telescope that much, but I had experience with Next.js before. The issue I worked on was implementing a UI for the dependency-discovery service.

Planning / Implementation

There were no designs pre-planned for this issue, so I had to come up with a simple, fast, and functional design.
Before getting into the design, we have to understand the dependency-discovery API: we need to know all the possible routes and what kind of data each route returns, to make use of as much of our API data as possible.

/projects

This route returns an array of strings listing the dependencies we use in the Telescope repo.

Samples of data

[
  "@algolia/client-common",
  "@babel/helpers",
  "react",
]

/projects/:namespace/:name?

This route returns the general information for a dependency. It's an object with id (string), license (string), and gitRepository (object). gitRepository has type, url, directory, and issuesUrl, all strings.

Samples of data

{
  "id": "@babel/core",
  "license": "MIT",
  "gitRepository": {
    "type": "git",
    "url": "https://github.com/babel/babel",
    "directory": "packages/babel-core",
    "issuesUrl": "https://github.com/babel/babel/issues?q=is%3Aopen+is%3Aissue+label%3A%22hacktoberfest%22%2C%22good+first+issue%22%2C%22help+wanted%22"
  }
}

/github/:namespace/:name?

This route returns an array of the dependency's issues labelled hacktoberfest, help wanted, or good first issue. Each object has htmlUrl, title, body, and createdAt, all strings.

Samples of data (the issue body is truncated here for readability):

[
  {
    "htmlUrl": "https://github.com/babel/babel/issues/7357",
    "title": "injecting external-helpers in a node app",
    "body": "<!---\r\nThanks for filing an issue 😄 ! Before you submit, please read the following:\r\n\r\nSearch open/closed issues before submitting since someone might have asked the same thing before! [...]",
    "createdAt": "2018-02-08T20:49:23Z"
  }
]

Now that we know what the API returns, let's make a draft design. For simplicity, I'm going to draw it by hand.

I started to look at which MUI components I could use for this.

After that, I need to plan how I will structure my code and what to add/modify.

1 - Create a route at /dependencies in src\web\app\src\pages; in Next.js, the name of the file is our route name.
The page should follow the other pages in Telescope, which means it has the SEO and NavBar components. We also need to add our DependenciesPage component, which is our dependencies page.
I was thinking of using the getStaticProps + revalidate features from Next.js to make a static page. But our API service needs to be running at the time Next.js builds all the static HTML, so we would need to modify our docker-compose to start the dependency-discovery service first and only build the static HTML after the service is up. For simplicity, I decided to just use useEffect to fetch the data.

import SEO from '../components/SEO';
import NavBar from '../components/NavBar';
import DependenciesPage from '../components/DependenciesPage';

const dependencies = () => {
  return (
    <div>
      <SEO pageTitle="Dependencies | Telescope" />
      <NavBar />
      <DependenciesPage />
    </div>
  );
};

export default dependencies;

2 - Add a new icon that redirects to our new route in the navbar src\web\app\src\components\NavBar\index.tsx

import { FiPackage } from 'react-icons/fi';

const iconProps: NavBarIconProps[] = [
 {
  ...
 },
 {
    href: '/dependencies',
    title: 'Dependencies',
    ariaLabel: 'Dependencies',
    Icon: FiPackage,
  },
]

3 - Set our dependencyDiscoveryUrl env.
In docker-compose (docker\docker-compose.yml), we need to forward DEPENDENCY_DISCOVERY_URL in the build args.

services:
  nginx:
    build:
      context: ../src/web
      dockerfile: Dockerfile
      cache_from:
        - docker.cdot.systems/nginx:buildcache
      # next.js needs build-time access to a number of API URL values, forward as ARGs
      args:
        ...
        - DEPENDENCY_DISCOVERY_URL

We also need to modify the Dockerfile in src\web\Dockerfile to add DEPENDENCY_DISCOVERY_URL as a build arg.

ARG DEPENDENCY_DISCOVERY_URL
ENV NEXT_PUBLIC_DEPENDENCY_DISCOVERY_URL ${DEPENDENCY_DISCOVERY_URL}

Now we need to forward that env so it is accessible in Next.js. We will need to modify src\web\app\next.config.js

const envVarsToForward = [
 ...,
 'DEPENDENCY_DISCOVERY_URL',
]

4 - Create our DependenciesPage component in src\web\app\src\components (because it contains a lot of lines of code, I'm just including some parts. Read more)
Our DependenciesPage component should have a useEffect that runs once on mount to fetch our dependencies from the /projects route. As JSX, it will have our page title and a DependenciesTable component, which is our table of dependencies; it takes a dependencies prop (the list of dependencies).
We also need some style to make our page responsive and to adjust colors in light/dark mode.

import { dependencyDiscoveryUrl } from '../config';

import { makeStyles } from '@material-ui/core/styles';
import { useEffect, useState } from 'react';

const useStyles = makeStyles((theme) => ({
  root: {
    backgroundColor: theme.palette.background.default,
    fontFamily: 'Spartan',
    padding: '1em 0 2em 0',
    paddingTop: 'env(safe-area-inset-top)',
    wordWrap: 'break-word',
    [theme.breakpoints.down(1024)]: {
      maxWidth: 'none',
    },
    '& h1': {
      color: theme.palette.text.secondary,
      fontSize: 24,
      transition: 'color 1s',
      marginTop: 0,
    },
    '& p, blockquote': {
      color: theme.palette.text.primary,
      fontSize: 16,
      margin: 0,
    },
  },
  container: {
    padding: '2vh 18vw',
    [theme.breakpoints.down(1024)]: {
      padding: '2vh 8vw',
      wordWrap: 'break-word',
    },
  },
}));
const DependenciesPage = () => {
   const [dependencies, setDependencies] = useState<string[]>();
   const classes = useStyles();

   useEffect(() => {
    (async () => {
      try {
        const fetchDependenciesData = await fetch(`${dependencyDiscoveryUrl}/projects`);
        setDependencies(await fetchDependenciesData.json());
      } catch (e) {
        console.error('Error Fetching Dependencies', { e });
      }
    })();
  }, []); // empty dependency array: run the fetch only once, on mount

  return (
    <div className={classes.root}>
      <div className={classes.container}>
        <h1>Dependencies</h1>
        <DependenciesTable dependencies={dependencies} />
      </div>
    </div>
  );
};

export default DependenciesPage;

5 - Create the DependenciesTable component in src\web\app\src\components\DependenciesTable\index.tsx (because it contains a lot of lines of code, I'm just including some parts. Read more). This is the component that contains our table, search bar, and table navigation. We get our dependency list from the dependencies prop, and create a function to update the dependency list based on the search query.
Set the limit of rows per page to 15.
We also need to add some style to match our drawing design and to adjust colors for light/dark mode.

type DependenciesTableProps = {
  dependencies: string[];
};

const DependenciesTable = ({ dependencies }: DependenciesTableProps) => {
  const classes = useStyles();
  const [page, setPage] = useState(0);
  const rowsPerPage = 15; // Set 15 element per page
  const [searchField, setSearchField] = useState('');

  // Compute dependencyList based on search query
  const dependencyList = useMemo(() => {
    setPage(0);
    if (!searchField) return dependencies;
    return dependencies.filter((dependency: string) => {
      return dependency.toLowerCase().includes(searchField.toLowerCase());
    });
  }, [dependencies, searchField]);

  return (
    <>
      <SearchInput text={searchField} setText={setSearchField} labelFor="Browse for a dependency" />

      <TableContainer>
        <Table sx={{ minWidth: 450 }} aria-label="custom pagination table">
          <TableBody>
            {dependencyList
              .slice(page * rowsPerPage, page * rowsPerPage + rowsPerPage)
              .map((dependency) => {
                return <Row key={dependency} dependency={dependency} />;
              })}
          </TableBody>
        </Table>
        <TablePagination
          className={classes.root}
          rowsPerPageOptions={[]}
          component="div"
          count={dependencyList.length}
          rowsPerPage={rowsPerPage}
          page={page}
          onPageChange={handleChangePage}
        />
      </TableContainer>
    </>
  );
};

export default DependenciesTable;

6 - Create the Row component in src\web\app\src\components\DependenciesTable\Row.tsx (Read more). This contains each of our rows along with the collapse. We have a useEffect waiting on a state called open (the state for the collapsed component) to change, which triggers the fetch for dependency information. When fetching GitHub issues, add an extra check for the API returning 403, which means the API rate limit was reached; if so, we need to show a message saying the GitHub API reached its limit, please use the link for <Dependency name> directly. A rough sketch of that fetch-on-open logic is shown below.
We also need to add some style to match our drawing design and to adjust colors for light/dark mode.
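Here is a simplified sketch of that behaviour; the hook, field, and import names are placeholders, not the code that was merged:

// Simplified sketch of fetching issues only when a row is expanded, with the
// 403 rate-limit check. Placeholder names throughout.
import { useEffect, useState } from 'react';
import { dependencyDiscoveryUrl } from '../../config';

type Issue = { htmlUrl: string; title: string; body: string; createdAt: string };

const useDependencyIssues = (dependencyName: string, open: boolean) => {
  const [issues, setIssues] = useState<Issue[]>([]);
  const [rateLimited, setRateLimited] = useState(false);

  useEffect(() => {
    if (!open) return; // only fetch once the row is expanded

    (async () => {
      try {
        const res = await fetch(`${dependencyDiscoveryUrl}/github/${dependencyName}`);
        if (res.status === 403) {
          // GitHub API rate limit reached: show the fallback message instead
          setRateLimited(true);
          return;
        }
        setIssues(await res.json());
      } catch (err) {
        console.error('Error fetching issues', err);
      }
    })();
  }, [open, dependencyName]);

  return { issues, rateLimited };
};

export default useDependencyIssues;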

7 - Testing:

  • Making sure our design is responsive.
  • Showing the right colors in light/dark mode.
  • Working search bar.
  • Working pagination.
  • Working collapse, with data fetched when the collapse opens.

Pull request reviews

Feedback from @humphd:

  • Use SWR instead of using useEffect for fetching
  • Add a paragraph after the title to explain what it is.
  • Add a spinner when content is loading.
  • Fix color, and font size issues.

Feedback from @joelazwar:

  • Reset item number to 1 when we search for something

Feedback from @DukeManh:

  • Use the default SWR fetcher instead of creating one.
  • Rename some functions.
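Putting the SWR feedback into practice looks roughly like this; a sketch that assumes a fetcher is configured globally via SWRConfig, with simplified component and import paths:

// Sketch of the SWR-based version of the data fetching, per the review
// feedback above. Assumes a default fetcher is provided app-wide (SWRConfig).
import useSWR from 'swr';
import { dependencyDiscoveryUrl } from '../config';
import DependenciesTable from './DependenciesTable';

const DependenciesPage = () => {
  const { data: dependencies, error } = useSWR<string[]>(`${dependencyDiscoveryUrl}/projects`);

  if (error) return <p>Error loading dependencies</p>;
  if (!dependencies) return <p>Loading...</p>; // the real page shows a spinner here

  return <DependenciesTable dependencies={dependencies} />;
};

export default DependenciesPage;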

Final products

Once the code is merged, the staging environment will ship this feature first.
On release 3.0.0, it will be shipped to the production environment.
Staging: https://dev.telescope.cdot.systems/dependencies/
Production: https://telescope.cdot.systems/dependencies/

by Kevan Yang at Sat Apr 23 2022 03:21:36 GMT+0000 (Coordinated Universal Time)


Alex Romanova

The final sheriff

Completed issues

There's a lot to go over. Let's start with things I specifically did.

Project description

Firstly, something I decided to take upon myself to complete, as time kept passing and it kept staying incomplete: the Supabase project description. Basically, take the info from the existing docs and rewrite it in a certain format. Before we leave Telescope to the next people, I want to make sure all projects have proper descriptions so people don't get lost.

SSO service documentation

As part of the larger REST API documentation issue, I decided to complete everything I could. Since the recording describing how the SSO service basically works was available, all that was left was to write it down and format it. I also know there were more things to it that I couldn't gather from said recording. I decided to file those as additional issues, while shipping at least the main idea of how it works.

There is also this nice comment that collects information on SSO stuff, you might want to check it out.

Status service documentation

Another part of the big REST API documentation issue. I was told Status was not difficult to figure out, so I took it as well. It turned out to actually not be that bad. The dashboard area seemed like it had more to it, but... it turned out to be pretty simple. I think there is a lot more we can do with it. Or, probably, not we, but the next batch of students to come.

Docusaurus home page

This was supposed to be a more interactive and cool visual, but in the end there turned out to not be enough time. Making things look fancy isn't as important as making things make sense. I decided to put my resources into documentation, especially the REST API docs that we have had open for so long. Besides, I did get stuck trying to make the animation I wanted work.

Perhaps, I should file an issue to improve my SVGs with animation. I also thought about how easy it was for me to create such simplistic static art. It wouldn't have to be a 16:9 scenery, but perhaps some smaller elements or parts that can be later placed here and there, and maybe animated in the future. I just want to give people toys they can then play with and put all around the website. I'm sure they will find a nice place for them.

I also decided to leave some basic level issues fixing some of the formatting. Since the next active semester to work on this would be hacktoberfest students, I know they will need some good first issues.

Merging the two About us versions

Now, this one I want to note, I didn't actually do. I realized it wasn't as simple as I first thought, outlined the steps to be done, and then asked for someone else with more experience with React to take it on. And someone did!

Issues created

As mentioned, I have created some issues myself, mostly simple ones for people to get used to the process. Here they are:

Managing

I was one of the sheriffs for the last week. The most fun time! Well, not as fun as somewhere in the middle of the term, where I would also get to communicate about people's progress and make priority lists... But still! I know it's a very important week and I know it's a tough one. Things need to be under control and things need to be done right. That's why I signed myself up for this - I like being, and can be, on top of things. I'm happy I did.

I did a lot of maintenance steps this week, especially towards the end of it.

Meetings

We only had the one meeting this week. It wasn't an 8 am one! I set a time of 2pm on Wednesday for our last triage. Seemed like a good all-around time. Many people were missing, but we also had a lot of engagement from those who did attend. This was the outline for the meeting:

  • Have PRs done by today!
  • Tomorrow at 2pm release, might shift to later if needed
  • Regular triage
  • Priorities
  • People's last input
  • Anything you want to add

We started with a simple going-over of issues and PRs, asking what's up with those... but this time, a little differently. This time it was important to ditch things that weren't important or were too time-consuming. We had people who were free to pick up what's important, and we also had people who had too much on their plate. It wasn't a simple "will you get it done?" anymore, but managing people's workloads and dividing some tasks into smaller versions of themselves. We ditched a bunch of things. We reassigned tasks to other people. I, myself, changed my focus from the visual Docusaurus stuff to completing the REST API docs. Tue had a lot to do, but somehow he was the only one who could do it. Well, I think he still had some help, but he definitely had the most on his plate for this week. Gold star, if we had one to give.

I then introduced the idea of leaving information behind for the next students to come. The simplest solution I had for this was to just include some info in the blog post. Everyone has to write a blog post, so they might as well talk about the things I want to know. Better though - to get on a call with me, so I can ask about details.

What did I want people to tell me? Basically, an overview of the area of Telescope they've been working on. I need to compile a complete overview of Telescope, and I am not familiar with all of its areas. Therefore, I need to collect that information.

Secondly, I want to pass on people's advice. I will definitely share mine, but I'm sure I'm not the only one. Especially, when it's about some other technology someone specializes in.

In the end, we talked about our experience with this course. I announced my gratitude for the numerous times I required help with git. People enjoyed their Telescope experience. I know I definitely did.

Setting deadlines

For releases, especially for the final week, it is extremely important to have that line drawn. It has to be simple for people to follow, since during finals nobody has time to read all the updates. I used Teams calendar to set those meetings way ahead of time, since it shows up right away for people. I think my timing for the triage was good, 2pm Wednesday - chill, decent time.

The release, however, wasn't that smooth. I first set it for Thursday 2pm, but as the time came closer, it became clear we couldn't land many of the things we wanted, specifically the most important ones. My instinct was to give people more time, to make sure there would be nothing else going on. Saturday, I thought. However, that idea wasn't popular. People were insistent on Friday, so there it was - Friday 4 pm - 3.0 release. I did most of my PR reviews and my own PRs during the night between Thursday and Friday. I haven't slept yet; it is currently 7pm on Friday. We are still doing the release. Things are breaking.

Reviews

I have done the best I can with reviews; however, there were many that I couldn't test. I'm not able to do Docker things with my version of Windows, so I had to ping other people to do it instead. I did review the boring docs though! There were also many technologies I wasn't qualified to review because of my lack of skills.

Labels

We had a lot of PRs. We needed to have reviews ready, so to make it the simplest experience for people, I kept assigning difficulty and area labels onto PRs. I also assigned some to issues, however, I haven't gone through all of them yet. I do plan to.

Annoying people

Especially towards the end I really kept pinging people for updates on things. Some PRs had no instructions. Some were drafts, but were still approved. Many were just not clear. So, those people had to be pinged. At some points, there would be no response, and a decision had to be made based on the unclear information we had.

Release

The group began reviewing and collaborating a few hours before the release meeting time. They really made those PRs just in time, so we proceeded to work on the release with everyone ready and on it. As soon as we started - things started breaking. There was a thing about nginx, a thing about a conflict of PRs, some files were not properly added, something something staging... The release wasn't formed properly because it produced errors, so you can see THERE IS NO 3.0.0 IN RELEASES!. Something something unit tests. Then SSO was broken. Then... uuh.. The database broke. The final problem was...

Parser is broken! No new posts will be shown on it x_x. So, for new posts you can look at staging. Unfortunate. I have been up since yesterday, others that were in the meeting were pretty exhausted as well. We decided to go back at it tomorrow with fresh brains. Not that I had any clue why things didn't work...

Cleanup

As this is the end of the semester, I want to leave the repository in the best state I can. I also decided to stay around after it's officially over, at least for the near future. I will have a co-op next semester, so I don't know how much time I will have left for Telescope. Either way, I have some things planned to contribute to the general improvement of quality and organization.

Sidebar

Sidebar updates! I haven't always been updating it, I'm sorry. But now I have, with some new things on there. At the very least, it needs to be updated at the very end of the semester. It can then be left untouched for some time.

Priorities

I have done this before as a sheriff. I went through all the issues and sorted them based on how important they are. I want to do a similar thing before I leave, as we have abandoned some unfinished business that was quite important.

Another big part of this process is to figure out what the issues are even about. There is a category for this specifically, named "talk". It means this issue needs to be discussed, as it is unclear. Why is this important? Because if it's been there for ages and is still unclear to us, it will likely be left untouched by future students as well. Once we do figure out what an unknown thing is about, we can either close it or improve its issue description. Maybe leave some comments explaining what we found out. So that people of the future don't waste time doing the exact same thing and have a better idea of what's going on.

How do I actually plan to do it? I just want to get into a call with a bunch of people, where we would go over the issues one by one. It would have to be after the semester is over, so perhaps we won't get many people attending. However, hopefully, David will be there, as he is the one mostly creating issues, or at least the one aware of them. I have added all issues to this project board so far, so a starting point has been set.

Themed releases

There has been a proposal this semester, my idea to do "themed releases". The idea was to make weekly meetings and tasks a little more fun and meaningful, as well as add some more structure to our process. I mean, you can just read the issue itself.

We experimented with this concept over the semester, and it brought some results. I don't want to lose this information and approach, so I wanted to turn it into some sort of static guide/advice kind of thing. So, I made it into a wiki page.

I am also unsure why we have some things on wiki pages, some in READMEs, and some on Docusaurus. Perhaps we need a better, more structured system for this. Hopefully, I won't forget to address it!

How to be a Project keeper

I made a draft of this document a long time ago. Now that there's not much time left, I needed to make sure it was done. I made some final changes, and now it's complete.

Being a Project keeper is a concept I thought I'd try to make stick. It's basically what I have been doing, or wanted to do, within Telescope. And now that I might have less time on my hands, I don't want these things to get lost and abandoned. I want my work to live on. There are many things that I think just make the developer experience better and should be done. So, I have created this concept to be passed on and, hopefully, picked up by someone next year.

Hacktoberfest issues

Something I know has to be done eventually, by someone, is preparing Telescope for Hacktoberfest. Firstly, labels need to be assigned: "good first issue", or just generally having appropriate area labels on things. Secondly, the issues themselves need to be decent - clear, understandable by noobs, doable.

Since I am planning to do that priority list and will be going over every issue out there, I will also be assigning appropriate labels and improving the issue descriptions. That should work great for improving the developer experience for Hacktoberfest.

Videos planned

As my tribute to the next generation, I want to leave information behind. And I know .md files are alright, blog posts are okay, but videos - oh, videos are just better. You put them on x2 speed, you get to relax and follow along... It's much more common for me to try to read a page and realize I haven't even processed what I just read - it's all blank - compared to watching a video and not remembering anything about it. Sure, you can still blank out on a video, but it's more engaging than text. Either way, I know I can make videos, so I will. They won't be like those tutorial videos though. I could edit things and make them extremely time-effective, but I don't want to. There's a reason I do streams and not YouTube videos.

Advice

I am not yet sure if I want this to be a single video, or separate ones. I don't want to make it too polished and time-consuming to make, instead, rather a free-form presentation kind of a stream that is divided into parts. Either way, I'll see how it goes.

Being a sheriff

I want to share my advice on being a sheriff. Holding meetings, planning, keeping track of things, communicating. There's a lot to it. You can do a bare minimum, which I will describe how to. Or, you could do all these other things extra, that I want to really share. Being a sheriff was my best experience of this class, so I really have things to tell.

Using projects

I was the one who enforced using projects this semester, and I think they should be used later on as well. They really stop working when nobody maintains them, but projects can really improve your experience. I want to show basic functionality and my own practices and uses for projects, such as: starting a new Project; using it to keep track of milestone progress; compiling a list of priorities; sorting issues by different types... As the one who used projects the most this semester, I think I'm the person to guide others on it.

Project keeper role

This will be somewhat of a video form for the document I made already. It will be a mix of personal experience, advice, demonstration of concrete things and explaining the structure of projects. Hopefully, it inspires someone else to continue on doing this role.

Students to come

Status of Telescope

We are now leaving Telescope behind in this current state. We need to be able to describe what exactly is "this state". What areas are there? What technologies are used? What is the progress on those areas? Where even is the code for this? How do you run it? Where did we leave things off? Basically, all you need to know about Telescope in its current state as a developer that wants to contribute.

Issues seller

This is a special category that I might continue as a series. The idea is to compile issues by a specific type, and present them in an appealing and intriguing manner. So that people actually get excited and want to work on them.

What types would be there?

  • Compilation of "good first issue"'s, specifically for hacktoberfest;
  • Issues that can be figured out and handled by a new open source developer. Also for hacktoberfest, but not the very simple ones. Just to show things I for sure know a student can do, which really brings confidence;
  • Important issues to focus on. As in, what is a priority for Telescope? What is that single bug that ruins the whole thing every time?;
  • Entry points to new technologies. If you are curious about docker, but you have never dealt with it before, what would be a good issue to start with?
  • Completely new directions. Things that have not been researched and attempted yet. For example, the Slack bot issue. Or the email notification issue. It's interesting, it's unexplored and it will be completely owned by you!

Experience of previous developers

I am planning to not only compile information from current developers on the current state of Telescope. I also want to record and transmit their advice, their wishes, their views on what Telescope could be, what is lacking, what they want to see completed in the future.

Those are the ideas I have had so far.
I want to have meetings after this is technically officially over, but open source is never over. I've grown attached to Telescope, I want it to do well. My videos will hopefully help others.

Future blog posts?

I have enjoyed blogging, honestly. I might continue doing it, maybe about Telescope, maybe not. After all, it doesn't have to be about Telescope, or open source, or even coding at all. Not many will see it. But maybe that is why I want to write things sometimes. So that only some people get to lay their eyes on it.

GG WP

by Alex Romanova at Sat Apr 23 2022 00:47:04 GMT+0000 (Coordinated Universal Time)

Friday, April 22, 2022


Gus McCallum

Adding SVE2 Support to an Open Source Library - Part III

This is the third part in a series on adding SVE2 optimization to an open source library - you can read part 1 here and part 2 here.

In my last post I ran into some snags at the end when building opus: some of the intrinsics I wrote for the file I modified errored out, and as such I wasn't able to build and test the library. In this post, I'm going to change tactics and try autovectorization to see if I can successfully build and test the library, after which I'll give some analysis of the results.

First off I'll start by clearing my work so far and downloading a fresh copy of the library. At this point I need to configure and build, but in order to prevent the NEON intrinsics from conflicting with the autovectorization I'm going to implement I'll need to turn off NEON support in the configure.ac file. I searched for mentions of intrinsics and turned them off, and then ran autogen.sh and configure to get the build configured. We can confirm intrinsics are now turned off by the output:

------------------------------------------------------------------------
  opus 1.3.1-107-gccaaffa9-dirty:  Automatic configuration OK.

    Compiler support:

    C99 var arrays: ................ yes
    C99 lrintf: .................... yes
    Use alloca: .................... no (using var arrays)

    General configuration:

    Floating point support: ........ yes
    Fast float approximations: ..... no
    Fixed point debugging: ......... no
    Inline Assembly Optimizations: . No inline ASM for your platform, please send patches
    External Assembly Optimizations:  
    Intrinsics Optimizations: ...... no
    Run-time CPU detection: ........ no
    Custom modes: .................. no
    Assertion checking: ............ no
    Hardening: ..................... yes
    Fuzzing: ....................... no
    Check ASM: ..................... no

    API documentation: ............. yes
    Extra programs: ................ yes
------------------------------------------------------------------------

Now, by subbing the CFLAGS mentioned in the last post (-O3 -march=armv8-a+sve2) into the makefile and taking care to run the tests under qemu-aarch64, we can see that the build and most of the tests execute successfully.

FAIL: celt/tests/test_unit_cwrs32
./test-driver: line 107: 448983 Illegal instruction     (core dumped) "$@" > $log_file 2>&1
FAIL: celt/tests/test_unit_dft
PASS: celt/tests/test_unit_entropy
PASS: celt/tests/test_unit_laplace
PASS: celt/tests/test_unit_mathops
./test-driver: line 107: 449031 Illegal instruction     (core dumped) "$@" > $log_file 2>&1
FAIL: celt/tests/test_unit_mdct
./test-driver: line 107: 449046 Illegal instruction     (core dumped) "$@" > $log_file 2>&1
FAIL: celt/tests/test_unit_rotation
PASS: celt/tests/test_unit_types
./test-driver: line 107: 449072 Illegal instruction     (core dumped) "$@" > $log_file 2>&1
FAIL: silk/tests/test_unit_LPC_inv_pred_gain
PASS: tests/test_opus_api
PASS: tests/test_opus_decode
PASS: tests/test_opus_encode
PASS: tests/test_opus_padding
./test-driver: line 107: 449716 Illegal instruction     (core dumped) "$@" > $log_file 2>&1
FAIL: tests/test_opus_projection
======================================================
   opus 1.3.1-107-gccaaffa9-dirty: ./test-suite.log
======================================================

# TOTAL: 14
# PASS:  8
# SKIP:  0
# XFAIL: 0
# FAIL:  6
# XPASS: 0
# ERROR: 0

.. contents:: :depth: 2

FAIL: celt/tests/test_unit_cwrs32
=================================

FAIL celt/tests/test_unit_cwrs32 (exit status: 132)

FAIL: celt/tests/test_unit_dft
==============================

FAIL celt/tests/test_unit_dft (exit status: 132)

FAIL: celt/tests/test_unit_mdct
===============================

FAIL celt/tests/test_unit_mdct (exit status: 132)

FAIL: celt/tests/test_unit_rotation
===================================

FAIL celt/tests/test_unit_rotation (exit status: 132)

FAIL: silk/tests/test_unit_LPC_inv_pred_gain
============================================

FAIL silk/tests/test_unit_LPC_inv_pred_gain (exit status: 132)

FAIL: tests/test_opus_projection
================================

FAIL tests/test_opus_projection (exit status: 132)

============================================================================
Testsuite summary for opus 1.3.1-107-gccaaffa9-dirty
============================================================================
# TOTAL: 14
# PASS:  8
# SKIP:  0
# XFAIL: 0
# FAIL:  6
# XPASS: 0
# ERROR: 0
============================================================================

Let's take a closer look at one of the tests that successfully made use of the SVE2 inclusion:

Running Opus Encode Test

./test_opus_encode
Testing libopus 1.3.1-107-gccaaffa9-dirty encoder. Random seed: 3135156945 (95E3)
Running simple tests for bugs that have been fixed previously
  Encode+Decode tests.
    Mode    LP FB encode  VBR,  11318 bps OK.
    Mode    LP FB encode  VBR,  14930 bps OK.
    Mode    LP FB encode  VBR,  67659 bps OK.
    Mode Hybrid FB encode  VBR,  17712 bps OK.
    Mode Hybrid FB encode  VBR,  51200 bps OK.
    Mode Hybrid FB encode  VBR,  80954 bps OK.
    Mode Hybrid FB encode  VBR, 127480 bps OK.
    Mode   MDCT FB encode  VBR, 752629 bps OK.
    Mode   MDCT FB encode  VBR,  25609 bps OK.
    Mode   MDCT FB encode  VBR,  33107 bps OK.
    Mode   MDCT FB encode  VBR,  78592 bps OK.
    Mode   MDCT FB encode  VBR,  73157 bps OK.
    Mode   MDCT FB encode  VBR, 137477 bps OK.
    Mode    LP FB encode CVBR,  11480 bps OK.
    Mode    LP FB encode CVBR,  21257 bps OK.
    Mode    LP FB encode CVBR,  63201 bps OK.
    Mode Hybrid FB encode CVBR,  25583 bps OK.
    Mode Hybrid FB encode CVBR,  36126 bps OK.
    Mode Hybrid FB encode CVBR,  54107 bps OK.
    Mode Hybrid FB encode CVBR, 108482 bps OK.
    Mode   MDCT FB encode CVBR, 934758 bps OK.
    Mode   MDCT FB encode CVBR,  25111 bps OK.
    Mode   MDCT FB encode CVBR,  33929 bps OK.
    Mode   MDCT FB encode CVBR,  52270 bps OK.
    Mode   MDCT FB encode CVBR,  79059 bps OK.
    Mode   MDCT FB encode CVBR, 117366 bps OK.
    Mode    LP FB encode  CBR,   7432 bps OK.
    Mode    LP FB encode  CBR,  16781 bps OK.
    Mode    LP FB encode  CBR,  90950 bps OK.
    Mode Hybrid FB encode  CBR,  18257 bps OK.
    Mode Hybrid FB encode  CBR,  37925 bps OK.
    Mode Hybrid FB encode  CBR,  56473 bps OK.
    Mode Hybrid FB encode  CBR,  78233 bps OK.
    Mode   MDCT FB encode  CBR, 780220 bps OK.
    Mode   MDCT FB encode  CBR,  20668 bps OK.
    Mode   MDCT FB encode  CBR,  38398 bps OK.
    Mode   MDCT FB encode  CBR,  74376 bps OK.
    Mode   MDCT FB encode  CBR,  68468 bps OK.
    Mode   MDCT FB encode  CBR, 141108 bps OK.
    Mode    LP NB dual-mono MS encode  VBR,   4884 bps OK.
    Mode    LP NB dual-mono MS encode  VBR,  18110 bps OK.
    Mode    LP NB dual-mono MS encode  VBR,  44628 bps OK.
    Mode    LP NB dual-mono MS encode  VBR,  15245 bps OK.
    Mode    LP NB dual-mono MS encode  VBR,  26620 bps OK.
    Mode    LP NB dual-mono MS encode  VBR,  61885 bps OK.
    Mode    LP NB dual-mono MS encode  VBR,  86977 bps OK.
    Mode    LP NB dual-mono MS encode  VBR, 119885 bps OK.
    Mode   MDCT NB dual-mono MS encode  VBR,   7123 bps OK.
    Mode   MDCT NB dual-mono MS encode  VBR,  19106 bps OK.
    Mode   MDCT NB dual-mono MS encode  VBR,  41453 bps OK.
    Mode   MDCT NB dual-mono MS encode  VBR,  10135 bps OK.
    Mode   MDCT NB dual-mono MS encode  VBR,  19040 bps OK.
    Mode   MDCT NB dual-mono MS encode  VBR,  57693 bps OK.
    Mode   MDCT NB dual-mono MS encode  VBR,  77731 bps OK.
    Mode   MDCT NB dual-mono MS encode  VBR, 165272 bps OK.
    Mode    LP NB dual-mono MS encode CVBR,   7245 bps OK.
    Mode    LP NB dual-mono MS encode CVBR,  16460 bps OK.
    Mode    LP NB dual-mono MS encode CVBR,  56065 bps OK.
    Mode    LP NB dual-mono MS encode CVBR,  13411 bps OK.
    Mode    LP NB dual-mono MS encode CVBR,  28783 bps OK.
    Mode    LP NB dual-mono MS encode CVBR,  61638 bps OK.
    Mode    LP NB dual-mono MS encode CVBR,  92219 bps OK.
    Mode    LP NB dual-mono MS encode CVBR, 110936 bps OK.
    Mode   MDCT NB dual-mono MS encode CVBR,   4047 bps OK.
    Mode   MDCT NB dual-mono MS encode CVBR,  21622 bps OK.
    Mode   MDCT NB dual-mono MS encode CVBR,  43253 bps OK.
    Mode   MDCT NB dual-mono MS encode CVBR,  12557 bps OK.
    Mode   MDCT NB dual-mono MS encode CVBR,  28091 bps OK.
    Mode   MDCT NB dual-mono MS encode CVBR,  57473 bps OK.
    Mode   MDCT NB dual-mono MS encode CVBR,  77203 bps OK.
    Mode   MDCT NB dual-mono MS encode CVBR, 154714 bps OK.
    Mode    LP NB dual-mono MS encode  CBR,   4000 bps OK.
    Mode    LP NB dual-mono MS encode  CBR,  12396 bps OK.
    Mode    LP NB dual-mono MS encode  CBR,  56699 bps OK.
    Mode    LP NB dual-mono MS encode  CBR,  10327 bps OK.
    Mode    LP NB dual-mono MS encode  CBR,  19576 bps OK.
    Mode    LP NB dual-mono MS encode  CBR,  36651 bps OK.
    Mode    LP NB dual-mono MS encode  CBR,  50625 bps OK.
    Mode    LP NB dual-mono MS encode  CBR, 122376 bps OK.
    Mode   MDCT NB dual-mono MS encode  CBR,   4916 bps OK.
    Mode   MDCT NB dual-mono MS encode  CBR,  14647 bps OK.
    Mode   MDCT NB dual-mono MS encode  CBR,  55741 bps OK.
    Mode   MDCT NB dual-mono MS encode  CBR,  12307 bps OK.
    Mode   MDCT NB dual-mono MS encode  CBR,  23408 bps OK.
    Mode   MDCT NB dual-mono MS encode  CBR,  62311 bps OK.
    Mode   MDCT NB dual-mono MS encode  CBR,  54876 bps OK.
    Mode   MDCT NB dual-mono MS encode  CBR, 104358 bps OK.
    All framesize pairs switching encode, 9810 frames OK.
Running fuzz_encoder_settings with 5 encoder(s) and 40 setting change(s) each.
Tests completed successfully.

Now we can inspect the encoding program and see how it makes use of SVE2 instructions.

find . -type f -executable -print | while read X ; do echo ======== $X ; objdump -d $X | grep whilelo ; done

The lines in question are too numerous to put here but the files affected are:

======== ./tests/test_opus_projection
======== ./tests/.libs/test_opus_encode
======== ./tests/.libs/test_opus_api
======== ./tests/.libs/test_opus_decode
======== ./celt/tests/test_unit_entropy
======== ./celt/tests/test_unit_cwrs32
======== ./celt/tests/test_unit_mathops
======== ./celt/tests/test_unit_rotation
======== ./celt/tests/test_unit_dft
======== ./celt/tests/test_unit_mdct
======== ./.libs/opus_demo
======== ./.libs/libopus.so.0.8.0
======== ./.libs/trivial_example
======== ./opus_compare
======== ./silk/tests/test_unit_LPC_inv_pred_gain

And a line count with find . -type f -executable -print | while read X ; do echo ======== $X ; objdump -d $X 2> /dev/null | grep whilelo ; done | wc -l returns 2903 instances of whilelo. I'll zero in on one of these files to see how it makes use of its SVE2 instructions.

Analyzing Opus Encode Test

I'll go back to the encode test I ran before and take a look at how it's using its SVE2 instructions now.

objdump -d test_opus_encode > ~/opus_encode_objdump

In searching around the output I can find 6 instances of whilelo at play here, the first 2 being in this <generate_music> section.

00000000004016b0 <generate_music>:
  4016b0:       d2800002        mov     x2, #0x0                        // #0
  4016b4:       d282d003        mov     x3, #0x1680                     // #5760
  4016b8:       2538c000        mov     z0.b, #0
  4016bc:       25631fe0        whilelo p0.h, xzr, x3
  4016c0:       e4a24000        st1h    {z0.h}, p0, [x0, x2, lsl #1]
  4016c4:       0470e3e2        inch    x2
  4016c8:       25631c40        whilelo p0.h, x2, x3
  4016cc:       54ffffa1        b.ne    4016c0 <generate_music+0x10>  // b.any
  4016d0:       712d003f        cmp     w1, #0xb40
  4016d4:       54000e4d        b.le    40189c <generate_music+0x1ec>
  4016d8:       a9bb7bfd        stp     x29, x30, [sp, #-80]!
  4016dc:       f000017e        adrp    x30, 430000 <memcpy@GLIBC_2.17>
  4016e0:       910593de        add     x30, x30, #0x164
  4016e4:       910003fd        mov     x29, sp
  4016e8:       a90153f3        stp     x19, x20, [sp, #16]
  4016ec:       d285a002        mov     x2, #0x2d00                     // #11520
  4016f0:       52955571        mov     w17, #0xaaab                    // #43691
  4016f4:       294093d4        ldp     w20, w4, [x30, #4]
  4016f8:       52955550        mov     w16, #0xaaaa                    // #43690
  4016fc:       8b020002        add     x2, x0, x2
  401700:       52800006        mov     w6, #0x0                        // #0

So let's break down what it's doing here. whilelo generates the loop's governing predicate: it takes the scalable predicate register p0.h as its first (destination) argument, and sets each halfword lane active while an element counter, starting from the value in its second argument (xzr, i.e. zero), is still lower than the value in register x3 (the #0x1680, or 5760, loaded above). Together with the b.ne branch at the end of the block, this forms a loop that runs until x2 has covered all 5760 halfwords.

  4016bc:       25631fe0        whilelo p0.h, xzr, x3

While lanes remain active, the program performs an st1h - a contiguous store of halfwords from a vector - writing the zeroed halfword lanes of z0.h to the address [x0 + x2*2], governed by the p0 predicate.

 4016c0:    e4a24000        st1h    {z0.h}, p0, [x0, x2, lsl #1]

It then increments x2 by the number of halfword elements in a vector (that's what inch does), so the next store picks up where this one left off.

  4016c4:       0470e3e2        inch    x2

While this helps us understand the mechanics of what's being called and why, what function does this serve in the program? The source code can give us some clues in a language that's easier to parse:

   /* Generate input data */
   inbuf = (opus_int16*)malloc(sizeof(*inbuf)*SSAMPLES);
   generate_music(inbuf, SSAMPLES/2);

We can see here that generate_music is a function that, much like the vol_createsample function in lab 5, creates dummy data to operate on and test the encoding utility. Looking at the function definition in full:

void generate_music(short *buf, opus_int32 len)
{
   opus_int32 a1,b1,a2,b2;
   opus_int32 c1,c2,d1,d2;
   opus_int32 i,j;
   a1=b1=a2=b2=0;
   c1=c2=d1=d2=0;
   j=0;
   /*60ms silence*/
   for(i=0;i<2880;i++)buf[i*2]=buf[i*2+1]=0;
   for(i=2880;i<len;i++)
   {
    opus_uint32 r;
    opus_int32 v1,v2;
    v1=v2=(((j*((j>>12)^((j>>10|j>>12)&26&j>>7)))&128)+128)<<15;
    r=fast_rand();v1+=r&65535;v1-=r>>16;
    r=fast_rand();v2+=r&65535;v2-=r>>16;
    b1=v1-a1+((b1*61+32)>>6);a1=v1;
    b2=v2-a2+((b2*61+32)>>6);a2=v2;
    c1=(30*(c1+b1+d1)+32)>>6;d1=b1;
    c2=(30*(c2+b2+d2)+32)>>6;d2=b2;
    v1=(c1+128)>>8;
    v2=(c2+128)>>8;
    buf[i*2]=v1>32767?32767:(v1<-32768?-32768:v1);
    buf[i*2+1]=v2>32767?32767:(v2<-32768?-32768:v2);
    if(i%6==0)j++;
   }
}

We can see that the entire function is essentially two loops, so it makes sense that we would be able to take advantage of whilelo to squeeze some more performance out of it. Using SIMD in this way lets each pass of the generated loop process a whole vector of samples at once instead of one at a time, which should speed things up considerably.
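To connect this back to the source: the #0x1680 (5760) loaded into x3 is exactly the number of halfwords written by the "60ms silence" loop (2880 stereo frames of two samples each). As a rough illustration only - not the generated code - the whilelo/st1h/inch sequence accomplishes something like this, just a whole vector of halfwords per iteration:

#include <stdint.h>

/* Illustration only: the first vectorized block zeroes buf[0..5759], the
 * 60ms of stereo silence at the start of the buffer. The SVE2 version stores
 * a full z-register of zero halfwords per loop iteration, with the whilelo
 * predicate masking off the final partial vector. */
static void silence_prefix(int16_t *buf)
{
    for (int i = 0; i < 5760; i++)
        buf[i] = 0;
}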

With that in mind, it would be interesting to see if there are loops in the source code that didn't get converted to SVE2 instructions and ascertain why. One such example is in main, which I'll show the first part of for context:

int main(int _argc, char **_argv)
{
   int args=1;
   char * strtol_str=NULL;
   const char * oversion;
   const char * env_seed;
   int env_used;
   int num_encoders_to_fuzz=5;
   int num_setting_changes=40;

   env_used=0;
   env_seed=getenv("SEED");
   if(_argc>1)
    iseed=strtol(_argv[1], &strtol_str, 10);  /* the first input argument might be the seed */
   if(strtol_str!=NULL && strtol_str[0]=='\0')   /* iseed is a valid number */
    args++;
   else if(env_seed) {
    iseed=atoi(env_seed);
    env_used=1;
   }
   else iseed=(opus_uint32)time(NULL)^(((opus_uint32)getpid()&65535)<<16);
   Rw=Rz=iseed;

while(args<_argc)
   {
    if(strcmp(_argv[args], "-fuzz")==0 && _argc==(args+3)) {
        num_encoders_to_fuzz=strtol(_argv[args+1], &strtol_str, 10);
        if(strtol_str[0]!='\0' || num_encoders_to_fuzz<=0) {
            print_usage(_argv);
            return EXIT_FAILURE;
        }
        num_setting_changes=strtol(_argv[args+2], &strtol_str, 10);
        if(strtol_str[0]!='\0' || num_setting_changes<=0) {
            print_usage(_argv);
            return EXIT_FAILURE;
        }
        args+=3;
    }
    else {
        print_usage(_argv);
        return EXIT_FAILURE;
    }
   }

The while loop here iterates through the command line arguments in _argv, and the logic within checks the validity of the arguments. The correct way to call the encoding test is in the format ./test_opus_encode [<seed>] [-fuzz <num_encoders> <num_settings_per_encoder>]. Disassembled, the first loop section looks like this:

  4012f4:       97ffff7f        bl      4010f0 <strcmp@plt>
  4012f8:       350001e0        cbnz    w0, 401334 <main+0x134>
  4012fc:       11000e73        add     w19, w19, #0x3
  401300:       6b14027f        cmp     w19, w20
  401304:       54000181        b.ne    401334 <main+0x134>  // b.any

We can tell from the reference to <strcmp@plt> that this is where the loop's first condition is evaluated, with the string comparison between the current command line argument and "-fuzz" taking place. So why isn't this loop vectorized? Let's break it down.

while(args<_argc)
   {

args is initialized to 1. The while loop executes as long as args is less than _argc (_argc is the number of command line arguments provided when invoking the program, including the program name itself).

    if(strcmp(_argv[args], "-fuzz")==0 && _argc==(args+3)) {

The first condition evaluated is whether the argument is the string "-fuzz".

        num_encoders_to_fuzz=strtol(_argv[args+1], &strtol_str, 10);

If it is and the number of arguments is 4, the number of encoders to fuzz is set with the next argument and execution moves to evaluation of the next condition.

        if(strtol_str[0]!='\0' || num_encoders_to_fuzz<=0) {

If strtol_str[0] (the character following the number just parsed from the _argv[args+1] string) is not a null terminator, or num_encoders_to_fuzz is less than or equal to zero - that is to say, there are characters in the argument where there should only be a number, or the number of encoders to fuzz was improperly set - then the proper usage is printed and the program exits.

if(strtol_str[0]!='\0' || num_encoders_to_fuzz<=0) {
            print_usage(_argv);
            return EXIT_FAILURE;
        }

Otherwise, it continues evaluating the command line arguments and checks whether num_setting_changes is set properly by the third argument, using the same logic as the previous condition.

num_setting_changes=strtol(_argv[args+2], &strtol_str, 10);
        if(strtol_str[0]!='\0' || num_setting_changes<=0) {
            print_usage(_argv);
            return EXIT_FAILURE;
        }

If these checks pass, args is incremented by 3. Otherwise, the program exits.

        args+=3;
    }
    else {
        print_usage(_argv);
        return EXIT_FAILURE;
    }

The args increment at the end makes the while condition evaluate false, so, all this to say, the loop body only runs once - and on top of that it is dominated by function calls and early returns - so it makes sense that SVE2 instructions wouldn't apply here. There would be no benefit to vectorizing a loop that can only execute once.

Conclusion

In conclusion, it's been interesting looking at how SVE2 optimization can benefit an open source library. This is a cool technology that will no doubt become pervasive very quickly and have widespread benefits, especially for large data-processing libraries such as this. I explored some different ways to make use of it, through compiler intrinsics as well as autovectorization; some attempts were challenging and less fruitful, while others found purchase and successfully optimized opus's encoding functionality. I broke down some code that was optimized and some that wasn't, and the reasons why, and took a closer look at the disassembled code compared to its source to see how and why the compiler implements SVE2 for us.

I hope my work can be useful to those interested in implementing SVE2 in their own projects, or to the maintainers of the opus project. The latter might find the tests that I couldn't get to pass with autovectorization a good place to start: the "core dumped" / illegal instruction message suggests those test binaries weren't run under qemu-aarch64 at runtime, and I couldn't determine how to apply it in those cases. Doing so would likely let all the tests pass and allow the entire library to take advantage of SVE2.

This project and this course at large have been very useful in changing my perspective on programming and have allowed me to get much closer to the metal than I have before. It's cleared up many misconceptions about how computers treat data - to paraphrase my professor, "Your other teachers probably told you variables are stored in memory - they lied." This project and course have been full of little epiphanies like that, which I think have been influential in refining my concept of programming, and I'm glad I was able to have this experience before graduating. Thanks for reading.

by Gus McCallum at Fri Apr 22 2022 19:43:19 GMT+0000 (Coordinated Universal Time)


Gerardo Enrique Arriaga Rendon

Reaching the final stage

This post should have been posted three weeks ago, but I didn't write it, so let's imagine that I wrote it three weeks ago :)

We're starting to enter the final set of releases: 3.0, this time starting with its alpha version. Telescope has come a long way, and there are some major changes still pending before Telescope 3.0 actually arrives.

A few things that I have to start working on are related to the Postgres backups that I promised a long time ago. Since we are now going to use a real database ("real", as in persistent - sorry, redis), we have to worry about the data we are storing, and the way we can take care of that data is by making backups and storing those backups somewhere else.

Well, we decided to break that issue into two steps: figure out how to make the backups at all, and then figure out how to store those backups. When we finish the first one, we will at least have some backups (although they are not going to be stored in a separate location yet).

Also, I started preparing to write the tests for the dependency-discovery service - more specifically, the tests for the /project and /github routes. To prepare for it, I went over the Jest documentation, since my experience with writing tests is almost zero...

Either way, look forward to the new release!

by Gerardo Enrique Arriaga Rendon at Fri Apr 22 2022 19:20:36 GMT+0000 (Coordinated Universal Time)


Thanh Van

3.0 Release - Final Milestone

Today is the last day of this semester; I wonder why time flies so fast. I still remember being in the first week, but now it's the 14th one. Our final goal for this semester is to ship Telescope from v2.5 to v3.0, which adds a lot of features, new technologies, etc.

My Contribution in 3.0

Because of the workload in the last few weeks, I was trying to balance the time I spent on Telescope with the time for my other courses. In v3.0, both Francesco and I tried to finish up the sign-up flow, which basically allows a user to include their Twitch or YouTube channel when signing up. A lot of decisions were made: at first, we decided to add another <TextInput> field on the same page where the user provides their blog URL; then, during the live session with Dave, we decided to put all the links inside the same field, separated by spaces. On the other hand, when I talked to Francesco, he wanted a separate, optional page where a user could provide their YouTube and Twitch channels.

The problem we were having was that we did not know how to make the page optional with Formik. By the end of the day, I realized it is not about how we configure Formik; it is about how we collect the data (the YouTube/Twitch URLs) on that page, and because that data is optional, we don't actually require it from the user. The code below is where I saw how the data is made optional:

Yup.object().shape({
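    // channelUrl is a plain Yup.string() with no .required(), which is what leaves this field optional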
    [channelUrl.name]: Yup.string(),
    [feeds.name]: Yup.array().of(Yup.string()),
    [allFeeds.name]: Yup.array().of(Yup.string()),
    [channelOwnership.name]: Yup.boolean().test(
      'agreed',
      channelOwnership.invalidErrorMsg,
      (val) => !!val
    ),
  }),

We finally have the PR to finish up the sign-up flow for Telescope. I also re-enabled the Sign Up button in this PR.

OSD700 Recap

I am now applying the concepts I learned in OSD600 to OSD700: how to work on an open source project, how to maintain one, and, especially in this course, how to be a sheriff. I already have a blog post about what OSD600 is about and what to expect when working on an open source project; see it here.

I would say that the version of me after taking this course is totally different from the one in OSD600. I am more active now; I don't hesitate to talk to people (thanks to the sheriff's work) and build relationships, and I kept running into new issues with git, so I picked up a lot of new knowledge along the way. Another thing I would say is that my organizational skills have also improved, which means I usually keep an ordered list when I am doing something. For example, when leading a meeting I would prepare a day before to know what I should say, and when working on PRs I list everything I need to do and take notes of what my colleagues suggest.

Finally, a big thanks to Dave - a wonderful prof who always supports his students when they need it, and who tells them what lessons to take away after solving tricky issues. If you are a programming student at Seneca, Dave is one of the profs you should not miss when selecting your courses.

by Thanh Van at Fri Apr 22 2022 18:33:52 GMT+0000 (Coordinated Universal Time)


Diana Belokon

OSD600 + 700, i love you

Thoughts about OSD600 and OSD700:

OSD600 and OSD700 are AMAZING!!!

Out of all the courses that I have taken, I think that the ones that taught me the most about real life developer work experience, were OSD600 and OSD700.

MeaningFULL

YES, some days I really felt like giving up because I couldn't land a PR on time, I was not in a mood to write a blog, I didn't know what I was doing with git, I had no idea how to implement something, or debug something. It was frustrating. But honestly, everything I was doing was a contribution to a real open source project. It felt like each PR had a meaning. It wasn't just a stupid meaningless assignment. The features and the bug fixes were all useful to the project. It was not just a zip file of a visual studio solution. IT IS THE ACTUAL FEATURE IMPLEMENTATION AND BUG FIX.

Respect + How to Be a True Developer

What's also good about this course is that everyone respects your hard work. We all know how hard we all work and we don't judge or neglect anyone's effort. We review, we help implement, we discuss, we sheriff, we encourage. Even the people that aren't close to you still help you. I didn't know people at college could be like that. Because from my experience, no one wants to help. Everyone is just busy with their own business. Feels like we live in a cold world sometimes. But OSD600 and OSD700 are making me feel like this world still has some hope hahaha. Really. It makes me think that people at work and school can be like a family, a supportive community, in a way, too. This alone is what makes these two courses feel like a co-op-like experience + it teaches you how to be a developer and help other developers.

Freedom, not Slavery

You get to decide what to work on (what feature, technology, development area, etc.). And no one expects you to know how to implement something from scratch. You can start with 0 knowledge and make a draft PR. And people are going to support you from day 1. If you get stuck, they gonna unstuck you. You can freely break stuff and try stuff. Just fix it afterwards :) or someone can jump on a call with you and work on it together.

Also, there are other contributors, like Seneca College alumni, PROFESSOR, and other experienced developers that you can work together with. This is the only course that has this cool side to it:DDDD Feels like you are in the same team as your professor and many other talented people. It is not just a school stuff kinda vibe.

Furthermore,

I believe that everything comes from a family, or the people that surround you. And I think that is so true when it comes to this course. Everyone is super passionate about what they are doing. Everyone cares. It makes you wanna care, too. It is amazing. And the professor is like a father, in a way. He knows your struggles and he tries his best to help you in any way he can. He can be flexible, strict, understanding, encouraging, supportive, and easily approachable. I only know two professors like that, including him. And I have had experience studying in 3 different countries, and finding these kinds of people who care about their students, or just about other people in general, is super hard. They are rare.

Conclusion

It is super hard, but the whole vibe is really nice. It makes you feel passionate and caring and encouraged. Taking this course makes you wanna improve yourself, weaken your weaknesses - even the most fearful ones.

by Diana Belokon at Fri Apr 22 2022 01:44:17 GMT+0000 (Coordinated Universal Time)


Tim Roberts

Semester in Review

This is it, my final blog for the semester. It's been quite the ride. To start off I will briefly go over what I have done to contribute this past week for the release of 3.0. I will then do a reflection on the course and give some insights into my experience.

Release 3.0

This week was a pretty busy week, what with exams and final projects being due. Even still, I was able to get some things done before we ship 3.0.

I was able to land my Octokit PR that I have been plugging away at the last little bit. I ended up not only adding the Octokit package to the dependency-discovery, but I also did the same in our Status microservice since the throttling, and retry plugins are included in the main package. I also ended up adding throttling and retry functionality to the dependency-discovery since that was the entire point of updating the package.

That was my only PR for this week. I did however do some reviews to try and help others get stuff landed for the final release of the semester. I first did another review of Tuee's PR for removing the legacy backend. This PR is an important one. We want to get it landed for 3.0 so we can start using all the cool new Supabase stuff that has been added this semester. I had some issues testing it, but after clarifying with Tuee I was able to spin it up and approve it for merging.

The other two reviews I did this week were a little simpler to complete. The first one was a bug fix by Francesco. He just disabled some of the logging that was cluttering up the Docusaurus build log.

The other review was a quick patch fix made by the professor himself to try and get our CI tests to not consistently fail. This would make it so people can get stuff landed for the final release, but we would have to properly fix it at a later date.

Reflecting

Looking back on the course, I am pretty proud of my accomplishments. In the beginning I had set out to improve my skills with Docker and backend work in general. I took on, within reason for the amount of work I was able to do, any and all issues related to Docker.

This eventually led to me taking responsibility for, and 'owning', the launch of our Docker Registry. Even though I 'owned' it, a lot of work and help was provided by a number of others. Josue in particular was a great help since he has such a great understanding of the project as a whole. I am especially appreciative of his help because he is not even a student this semester; all the help he gave was during his own free time.

A lot of what I did to get the registry up and running was getting things like PRs started to the best of my ability, then posting them as draft PRs and asking for help putting the last few pieces together. It was nice working in such a collaborative manner.

I can especially appreciate the collaboration because at my place of employment, I am the lone developer after a senior dev took another opportunity a few months ago. It's hard working through things as a junior developer by yourself. Having many minds on a single issue can not only get the task done sooner, but also save an individual from getting stressed and burnt out working on issues that are giving them trouble. This has shown me that I do not want to work at a startup anymore; I would rather have a team I can depend on and break through issues with.

So, while a lot of the testing of the registry setup was done by Josue and Dave, I still made sure it all happened. During the triage and planning meetings, I would make sure to ask what direction, or what the next steps, would be for the registry. I was driving the issues and making sure things got done by following up or starting things on my own. That is why, even though I feel like I was guided through a lot of what had to get done code-wise, I still feel like I can honestly say that I 'owned' the issue, and that I was responsible for getting the registry up and running.

One problem with me taking ownership of the registry is that I put all my attention into it and was not keeping up with what everyone else was doing. Yes, I knew the big-picture things, like adding Supabase or getting the Parser service up and running, but when the registry launched there was still about a month left of the course, and when I was looking at issues to pick up and help with, it felt like I was so far behind. I felt that, since other people had already been working on things related to these issues, it would be far easier and take much less time for them than if I were to try and bash away at it.

A prime example of this is how long it took me to do the Octokit stuff. Jerry had mentioned it would be a relatively quick and simple PR when I took it over from him. Looking back on it, it was, but I had to do a bunch of research just to find out what Octokit was, and then I was uncertain of how and where we wanted to use the expanded package. Overall, it took me far longer to get the PR landed than it would have taken Jerry, but he had other, more important issues to tackle.

I tried to make up for this by getting in on more reviews, but almost every time I went to check if something needed to be reviewed, everything was marked with changes requested, so I left it alone. I also tried looking into some of the Supabase issues, but I hadn't been following that work at all, so it was quite daunting to try to step in and fix things when I didn't even know how to spin it up.

Wrapping up

Just like the previous course, this course was another great example of how to give students real-life work experience that will look good on almost any resume. I feel much more confident in my Git abilities and feel like I have learned a lot about Docker, nginx, registries, and workflows/YAML files.

These two courses should be core curriculum, but I understand why they are not. There is only one David Humphrey, and without him I find it hard to see these courses being as successful as they are. He really goes above and beyond what most other teachers do. I just wish I had known his cloud computing course was going to be available to my program this summer. If I had a time machine I would gladly take that course over the Azure Fundamentals cloud course I just completed this semester.

by Tim Roberts at Fri Apr 22 2022 00:40:36 GMT+0000 (Coordinated Universal Time)


Jia Hua Zou

Telescope 3.0 (soon)!

The release of 3.0 is tomorrow!! This project has come a long way. There are lots of new technologies that were implemented in a 14-week period. I have learned a lot and I am very glad to have had this unique experience with the other students (and contributors).

I haven't been posting due to the workload from my other courses. Now that it is done, I can focus my attention on Telescope. Since there isn't much time before 3.0, I will be helping out with other people's issues, either by trying to help them directly or by reviewing their PRs. Other than that, there isn't much to say.

I want to say thank you to David for teaching this course (his other courses are 😫 👌 💯 🔥) and to the team members who have been with me till now. This was a very unique course and I'm glad I took it - no regrets. I am planning to continue contributing to Telescope after this semester.

by Jia Hua Zou at Fri Apr 22 2022 00:18:08 GMT+0000 (Coordinated Universal Time)

Thursday, April 21, 2022


Gus McCallum

Optimizing a Program Through SVE2 Auto-Vectorization

Today I'm going to be taking another look at the volume scaling algorithms we benchmarked in my last post with the goal of adding SVE2 optimization and further improving the runtime. Because we're using SVE2 we need to make these changes on either vol4.c or vol5.c, as those are the AArch64-specific algorithms that take advantage of inline assembly and intrinsics, respectively.

To make things simple I'll use the first candidate, vol4.c, which uses inline assembly. The full code is as follows:

int main() {

#ifndef __aarch64__
        printf("Wrong architecture - written for aarch64 only.\n");
#else


        // these variables will also be accessed by our assembler code
        int16_t*        in_cursor;              // input cursor
        int16_t*        out_cursor;             // output cursor
        int16_t         vol_int;                // volume as int16_t

        int16_t*        limit;                  // end of input array

        int             x;                      // array iterator
        int             ttl=0 ;                 // array total

// ---- Create in[] and out[] arrays
        int16_t*        in;
        int16_t*        out;
        in=(int16_t*) calloc(SAMPLES, sizeof(int16_t));
        out=(int16_t*) calloc(SAMPLES, sizeof(int16_t));

// ---- Create dummy samples in in[]
        vol_createsample(in, SAMPLES);

// ---- This is the part we're interested in!
// ---- Scale the samples from in[], placing results in out[]


        // set vol_int to fixed-point representation of the volume factor
        // Q: should we use 32767 or 32768 in next line? why?
        vol_int = (int16_t)(VOLUME/100.0 * 32767.0);

        // Q: what is the purpose of these next two lines?
        in_cursor = in;
        out_cursor = out;
        limit = in + SAMPLES;

        // Q: what does it mean to "duplicate" values in the next line?
        __asm__ ("dup v1.8h,%w0"::"r"(vol_int)); // duplicate vol_int into v1.8h

        while ( in_cursor < limit ) {
                __asm__ (
                        "ldr q0, [%[in_cursor]], #16    \n\t"
                        // load eight samples into q0 (same as v0.8h)
                        // from [in_cursor]
                        // post-increment in_cursor by 16 bytes
                        // and store back into the pointer register


                        "sqrdmulh v0.8h, v0.8h, v1.8h   \n\t"
                        // with 32 signed integer output,
                        // multiply each lane in v0 * v1 * 2
                        // saturate results
                        // store upper 16 bits of results into
                        // the corresponding lane in v0

                        "str q0, [%[out_cursor]],#16            \n\t"
                        // store eight samples to [out_cursor]
                        // post-increment out_cursor by 16 bytes
                        // and store back into the pointer register

                        // Q: What do these next three lines do?
                        : [in_cursor]"+r"(in_cursor), [out_cursor]"+r"(out_cursor)
                        : "r"(in_cursor),"r"(out_cursor)
                        : "memory"
                        );
        }

// --------------------------------------------------------------------

        for (x = 0; x < SAMPLES; x++) {
                ttl=(ttl+out[x])%1000;
        }

        // Q: are the results usable? are they correct?
        printf("Result: %d\n", ttl);

        return 0;

#endif
}

To start, we need to include the relevant library by adding an include.

#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include "vol.h"
#include <time.h>
#include <arm_sve.h>

#ifndef __aarch64__
        printf("Wrong architecture- written for aarch64 only.\n");

Next, I changed the duplicate instruction's destination to the z register as per the SVE2 standard.

__asm__ ("dup z1.h,%w0"::"r"(vol_int)); //duplicate vol_int into z1.h
...
"sqrdmulh z0.h, z0.h, z1.h      \n\t"

Next the makefile that we use to build the program needs to be changed to trigger the use of SVE2 by the compiler.

vol4:    vol4.c vol_createsample.o vol.h
         gcc ${CCOPTS} vol4.c -march=armv8-a+sve2 vol_createsample.o -o vol4

And finally, when running it we need to make sure to launch it under qemu-aarch64, to emulate hardware with SVE2 support, as the real thing isn't available to us yet. I ran it with the following command and confirmed it worked as intended.

qemu-aarch64 ./vol4
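For reference, the scalar C equivalent of the scaling loop - which the compiler could also auto-vectorize on its own under the same -O3 -march=armv8-a+sve2 flags - looks roughly like this (a sketch only: it ignores the rounding and saturation that sqrdmulh provides):

#include <stdint.h>

/* Rough scalar equivalent of the scaling loop (illustration only):
 * multiply each Q15 sample by the Q15 volume factor and keep the high bits.
 * The sqrdmulh instruction additionally rounds and saturates, which this
 * sketch omits. */
static void scale_samples(int16_t *out, const int16_t *in,
                          int16_t vol_int, int samples)
{
        for (int i = 0; i < samples; i++)
                out[i] = (int16_t)(((int32_t)in[i] * vol_int) >> 15);
}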

This has been a quick exploration of making use of autovectorization to implement SVE2 in a program. Enjoy!

by Gus McCallum at Thu Apr 21 2022 22:44:00 GMT+0000 (Coordinated Universal Time)


Thanh Van

3.0 Planning

The semester is coming to an end, and we have to get our work done for the final release. Here are some quick updates on what I have done so far.

Allows User to Include YouTube URL in Sign Up

We already added support for YouTube feed discovery, and now we are trying to update the UI to allow a user to include their YouTube channel along with their blogs. There is also a similar issue assigned to me, which allows a user to add their Twitch channel when signing up.

At first I thought this issue should not take a lot of time because we already had support for YouTube feed discovery. The only thing I had to do was update the UI so the user could provide their YouTube channel link. Moreover, YouTube already provides an RSS feed for each channel at https://www.youtube.com/feeds/videos.xml?channel_id=[channelID]. However, Dave made a bunch of changes in PR #3488 to allow feed-discovery to handle multiple feed URLs and different types of feeds; this is the most important piece in terms of allowing users to include their YouTube and/or Twitch channels when signing up.

Both the YouTube and Twitch features have to be done in the final release, so the next generation of students can register their Telescope accounts on the Telescope website.

2nd Time Becoming a Sheriff

This week I had another chance to become a sheriff, together with Jerry. It was wonderful because I got to see the work sheriffs do both in a release week and in a planning week. Some PRs I reviewed, along with the YouTube issue I had:

by Thanh Van at Thu Apr 21 2022 22:08:32 GMT+0000 (Coordinated Universal Time)

3.0-alpha Release

I would say this week was great for me, and also the most terrible week I have had this semester. It was my first time becoming a sheriff, and I also had to redo an entire project for another course.

1st Time Becoming a Sheriff

On the Telescope team, everyone takes a turn being a sheriff - at least once, maybe twice - and there are two sheriffs each week. This is a good experience for those who want to know what their future jobs will be like and what to expect in meetings.

At first, I was a bit scared of becoming a sheriff, because I had never done it before. I talked to Diana a week before we became sheriffs. She was worried about it too, but I tried not to worry, to get over the fear, and to encourage her. Personally, I didn't mind talking for the whole meeting; we originally planned for one person to lead the meeting, but Diana changed her mind and wanted to split it half and half, meaning one of us would lead the first half of the meeting and the other would continue from there. I wasn't clear at first about what she wanted, but I followed her idea since it wasn't a big deal.

Another reason why I was not afraid of leading a meeting is that every team member has their own first time becoming a sheriff, so I thought they would know the feeling. I did not put too much pressure on being a sheriff because, at the end of the day, it was just a meeting between team members. Simple thoughts kept me from worrying about anything, and I found that to be effective sometimes.

Sheriff Duties

This week I encouraged members to continue working on their tasks, and to actively ask for help if they needed it. We had already prioritized what was most important to get done in 3.0-alpha, so no new issues were assigned to our contributors; they just needed to focus on what they already had. Some PRs I reviewed:

  • Since we added Supabase and will keep data in it, we need a script to run backups. PR #3405
  • Some minor changes in Docusaurus and updated colors in PR #3406
  • Adding tests for /github/:project in PR #3412
  • Managing the database model and migrations with Prisma in PR #3418. Duke always tried to bring new things into Telescope; I really admired him!
  • Allowing users to press the Enter key when using the search bar in PR #3432 (a rough sketch of this kind of handler follows the list)
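
Here is a rough idea of what that kind of Enter-key handler can look like in React (this is my own sketch with made-up component and prop names, not the actual code from PR #3432):

```tsx
// Sketch only: trigger a search when the user presses Enter in the search input.
// Component and prop names are assumptions, not Telescope's real ones.
import type { KeyboardEvent } from 'react';

type SearchBarProps = {
  onSearch: (text: string) => void;
};

export function SearchBar({ onSearch }: SearchBarProps) {
  const handleKeyDown = (event: KeyboardEvent<HTMLInputElement>) => {
    // Run the search on Enter instead of requiring a click on a search button.
    if (event.key === 'Enter') {
      onSearch(event.currentTarget.value);
    }
  };

  return <input type="search" placeholder="Search posts" onKeyDown={handleKeyDown} />;
}
```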

3.0-alpha Release

We decided to do the release on Friday instead of Thursday, which allowed others some time to finish their work. The 3.0-alpha version of Telescope is now released; check it out at https://telescope.cdot.systems/

Diana wanted to do the release because she told me she had never tried it before. Luckily, she was the only one who did the release without any obstacles. It was very funny that she was nervous during the meeting and always asked before she clicked or entered anything 😅.

Conclusion

Outside of OSD700, this was not the first time I had led a meeting, so it was another chance for me to practice leadership skills. I always keep this saying in mind: "If we cannot do it alone, we will do it together".

by Thanh Van at Thu Apr 21 2022 21:33:28 GMT+0000 (Coordinated Universal Time)

3.0-alpha Planning

The Plan for 3.0-alpha

The semester is almost at an end and everybody is spending time preparing for their exams, so we planned to prioritize which features/issues needed to get done for 3.0-alpha and the 3.0 release. This week, we decided to focus on the features we need for the final release and removed anything unimportant from the milestones.

My Contribution

I did not do a lot of work in terms of putting new code into Telescope; instead, I tried to spend some time reviewing others' PRs. Even though it was "just" PR reviewing, I think it took almost as much time as writing new code. Some of the PRs that I reviewed this week:

  • Fix Docusaurus broken links
  • Get rid of Satellite information from Dashboard
  • Run a Satellite instance to provide healthcheck route
  • Add GIT_COMMIT override for local environments
  • Besides getting rid of old Satellite links in the Dashboard, Joel also added new links to the sidebar, such as Portainer and Supabase production & staging

Final Thoughts

When reviewing the PRs, I would say some of them did not take a lot of time to review and test. However, there were also PRs I spent hours on, figuring out what was happening and how to test them. Sometimes I ran into failures because I didn't run pnpm install after pulling new changes from upstream; I realized how clumsy I was even though I do it almost every day :D

by Thanh Van at Thu Apr 21 2022 20:42:15 GMT+0000 (Coordinated Universal Time)


Joel Azwar

Telescope 3.0: Alpha and Release

First, sorry for the lack of updates; I know I'm missing a lot of details since the previous Telescope post. To sum it up, lots of new features and fixes were shipped throughout 2.8 and 2.9, but since the 3.0 alpha, which was released 2 weeks ago, we've had to shift our priorities towards the mainline features to be shipped in the 3.0 major release, which we're shipping this week.

Supabase Progress

So we've finally got our own Supabase client running on staging, and we're planning to deploy it to production for 3.0, which is really exciting.

Jerry has been lending a hand by coming up with more tables we can use for more info (e.g. github_data). Not only that, but he's written a script to back up our Supabase/Postgres data in the event of a failure, which is really neat! The process involves starting a temporary container just to run the backup script.
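
To give a rough idea of what a container-based backup could look like (this is only a sketch with a made-up image tag, paths, and connection string, not Jerry's actual script):

```typescript
// Sketch only: run pg_dump inside a throwaway postgres container and write the
// dump to a bind-mounted host directory. All names here are assumptions.
import { execFile } from 'child_process';
import { promisify } from 'util';

const run = promisify(execFile);

async function backupDatabase(databaseUrl: string, outFile: string): Promise<void> {
  // --rm removes the temporary container as soon as pg_dump finishes.
  await run('docker', [
    'run', '--rm',
    '-v', `${process.cwd()}/backups:/backups`,
    'postgres:15',
    'pg_dump', databaseUrl, '-f', `/backups/${outFile}`,
  ]);
}

backupDatabase('postgres://user:password@db-host:5432/telescope', 'telescope.sql')
  .then(() => console.log('backup complete'))
  .catch((err) => console.error('backup failed', err));
```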

We've been migrating things like accounts and feeds to the Supabase tables. But when we want to alter these tables, say to add row level security, there are a few ways you could do it. First, you could just go to the Supabase Studio and make the change manually with the GUI they've built. Or we could use Prisma, an ORM tool for Node.js that Duc had suggested. This way, we generate an SQL file with the Prisma script (which records a name and date to keep a history of the migrations), edit in all the SQL statements we need (ALTER, INSERT, etc.), then push it upstream for it to be run in prod/staging.
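
For a sense of the ORM side once a migration has been applied, here is a minimal sketch of querying a table through the generated Prisma client (the feeds model and its fields are assumptions, not Telescope's actual schema):

```typescript
// Sketch only: read rows through a generated Prisma client. The model name and
// fields are assumptions, not Telescope's real schema.
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function listFeeds() {
  // Roughly equivalent to: SELECT * FROM feeds WHERE url IS NOT NULL;
  const feeds = await prisma.feeds.findMany({
    where: { url: { not: null } },
  });
  console.log(feeds);
}

listFeeds()
  .catch((err) => console.error(err))
  .finally(() => prisma.$disconnect());
```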

So it's exciting to see Supabase really coming to its own in Telescope, there's still lots of potential for it to grow so I can't wait to see where it goes from here.

Status Service

So for the alpha 2 weeks ago, I worked on a few issues regarding the status service. They were pretty minor, but I wanted to do what I could to contribute to the alpha and major releases, so I fixed an issue that had been hanging around in our status service for a while. We have a contributor card that shows a GitHub name and avatar for the last Telescope commit. The avatar image was running into some sort of CORS error and didn't load.

After some time debugging, I found out all it took was adding the crossorigin="anonymous" attribute to the img tag. And just like that, the contributor card shows the avatar! Another bug down for 3.0 :) Right now, I think we just need to worry about the rate limit for the API, since that will affect the info on both the contributor and commit cards.
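
For anyone curious, the change looks roughly like this in a React component (the surrounding markup and prop names are my own assumptions; the crossOrigin attribute is the actual fix):

```tsx
// Sketch only: request the avatar with CORS enabled so the browser will load it.
// The component structure is an assumption, not the actual Telescope code.
type ContributorAvatarProps = {
  avatarUrl: string;
  username: string;
};

export function ContributorAvatar({ avatarUrl, username }: ContributorAvatarProps) {
  // In JSX, the HTML attribute crossorigin="anonymous" is written as crossOrigin.
  return <img src={avatarUrl} alt={`${username}'s avatar`} crossOrigin="anonymous" />;
}
```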

Major Release 3.0

It's already the end of the semester and we're going to release 3.0. It's been a great experience working with Telescope, and I'll sum it up in a different blog post. But that's the progress so far, and we've just got one more release to deploy.

Thanks for reading :)

by Joel Azwar at Thu Apr 21 2022 12:25:00 GMT+0000 (Coordinated Universal Time)