Planet CDOT (Telescope)

Monday, April 12, 2021


Yuan-Hsi Lee

First Attempt at Intersection Observer

Thanks to Pedro, I learned a cool API this week and applied it in our favorite project telescope.

I was assigned an issue to implement the new banner design, which is to make the navbar appear only when the user scrolls to the timeline. In other words, the navbar should not be shown in the banner; it should only appear once the first post reaches the top of the screen.

To accomplish this feature, I need to track elements on the current screen. The API that Pedro introduced to me is Intersection Observer. The basic idea of this API is to observe whether an element is in the viewport. The isIntersecting property on each observed entry tells you whether the element you're observing is still in the viewport. The element doesn't have to be 100% in the screen or 100% out of the screen; the observed percentage (the threshold) can be configured as well.

Our goal is to show the navbar when the user scrolls to the timeline and the first post is at the top of the screen. In other words, the navbar should be shown when the banner is completely out of view, so we attach the observer to our banner.
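To make this concrete, here is a minimal sketch of the approach, assuming a `#banner` element and a hypothetical `setNavbarVisible` callback (neither is Telescope's actual code):

```javascript
// The navbar should appear only once the banner has fully left the viewport.
function shouldShowNavbar(entry) {
  // isIntersecting is a boolean property on IntersectionObserverEntry:
  // true while any part of the observed element is still visible.
  return !entry.isIntersecting;
}

// Browser-only wiring (skipped when no DOM is available):
if (typeof IntersectionObserver !== 'undefined') {
  const banner = document.querySelector('#banner'); // assumed selector
  const observer = new IntersectionObserver(
    (entries) => {
      entries.forEach((entry) => setNavbarVisible(shouldShowNavbar(entry)));
    },
    { threshold: 0 } // fire when the element crosses 0% visibility
  );
  observer.observe(banner);
}
```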

In this PR, I moved the navbar into the timeline component so that I could separate it from the banner. I also added props to the components involved in order to share the result of the intersection observer. After moving the navbar, I also needed to add it to the About page and Search page, because all pages used to share the navbar when it lived in the banner header. The other change that needed to be made was the CSS styling of the navbar on the About page. The About page uses certain colors from our palettes, and once we move the navbar there, the navbar icons pick up those colors through their links. By adding another styling rule, the color can be overridden. I'm still thinking about whether there is a better way to do this, since the original navbar styling shouldn't have to be overridden to be correct.

It was an amazing experience to work with something new. Thanks to Pedro and Dave for their help and suggestions in this PR! My initial thought was actually to use React context. However, it felt too complicated here, and since we only need to share the state with one component, we can do it without context. But I'm definitely gonna check out more practices around React context.

by Yuan-Hsi Lee at Mon Apr 12 2021 01:15:16 GMT+0000 (Coordinated Universal Time)

Saturday, April 10, 2021


David Humphrey

Shipping Telescope 1.9.5

Yesterday the team shipped Telescope 1.9.5, and brought us one step closer to our 2.0 goals.  I spent a lot of time mentoring, debugging, reviewing, and also writing code in this release, and wanted to write about a few aspects of what we did and how it went.

First, I was struck by the breadth and depth of things that we had to work on during this cycle. I was talking with a friend at Apple recently, who was reflecting on how complex modern software stacks have become. I've been programming for over 30 years, and he even longer, and neither of us has ever reached the promised future where a NEW! language, framework, tool, or approach suddenly tames the inherent complexities of distributed systems, conflicting platform decisions, or competing user expectations. Reading through this release's code changes, I wrote a list of some of the things I saw us doing:

  • Writing Dockerfiles
  • Dealing with differences between docker Environment Variables vs. Build Args
  • Babel configuration wrangling
  • Elasticsearch
  • Redis
  • Browser origin issues
  • Nginx proxies and caches
  • OpenID
  • Firebase and Firestore mocking in CI and development
  • Traefik service routing
  • Combining SAML Authentication with JWT Authorization
  • Using Portainer to administer containers
  • Writing good technical documentation
  • Dealing with different ways of running the same code: locally, CI, via Docker, production
  • Dealing with failed Unit tests due to timeouts in CI
  • Understanding mock network requests in tests
  • Dependency Updates, some with breaking APIs
  • Configuring tests to be re-run on file changes (watch)
  • Dealing with authenticated requests between microservices
  • Writing e2e tests using Playwright
  • Hashing functions
  • Different strategies for paging results in REST APIs
  • Role-based Authorization
  • HTML sanitization
  • REST API param validation
  • TypeScript
  • Vercel Environment Variables
  • Material UI
  • Implementing multi-step forms in React
  • User Sign-up Flows
  • Accessibility
  • Intersection Observer API
  • React Refs, Context, custom Hooks
  • Scroll Snap API
  • Polyfills
  • Updating and Removing legacy code
  • Mocking database tests
  • HTTP headers

This list is incomplete, but helps give a sense of why Telescope is so interesting to work on, and how valuable it is for the students, who are getting experience in dozens of different technologies, techniques, and software practices. The funny thing is, if I proposed a new course that covered all of these topics, I'd be shot down in a second. I have some colleagues who are convinced that the best way to learn is by working with toys and shielding students from the realities of modern software; I disagree, and have always favoured doing real things as the best way to prepare for a life of software development, which stops being neat and tidy the minute you start doing anything other than closely scripted tutorials. We don't help our students by shielding them from all the complexities of what they must eventually face.

I had a former student email me recently, who was struggling to reconcile how she felt about programming with what it actually was now that she was doing it full-time. A lot of what she said sounded familiar to me, and also very normal. Rather than perceiving her discomfort as a problem, I recognized it for what it really is: the gap between the impossible demands our software makes of us and how well equipped we are to meet them. Programming isn't something you learn in 24 hours, one semester, or during a degree. This is a long, winding road, and accepting that it's hard for all of us is an important part of not giving up. Not giving up is 90% of what you need to be a good programmer.

So how do I get students to work on code like this?  First, I don't expect perfect results.  We work in small steps, issue by issue, pull request by pull request.  We get it wrong and correct our mistakes.  We struggle through trying different approaches until we land on one that feels right.  We help one another.

This past week I saw a lot of students working together on Slack and Teams to write fixes and do joint reviews.  The move to virtual learning has opened the door to much greater collaboration between students: anyone can share their screen with anyone else in the project at any time, and it's easy to "let me show you what I'm struggling with here."  I'm also fascinated at how students will join in on calls even if they weren't invited, knowing that their presence there will be welcomed rather than met with questioning looks.  This openness to collaboration, and to each other, is exactly what I've sought to build for many years.

On Thursday I spent most of the day stuck on writing one tricky end-to-end test for our authentication flow.  No matter what I did, one of our microservices kept returning a 200 vs. 201, even though the code never returns a 200!  I tried everything I knew how to do, writing, rewriting, and testing from different angles.  Nothing worked.  Eventually I reached out to Chris and Josue, who were just coming online to try and write some tests together.  Sharing my screen and talking to them for 5 minutes completely unblocked me, and was worth more than the 5 hours I'd already spent: our tests were silently automocking fetch(), and every request resulted in a 200.
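As an illustration of how an automocked fetch() produces this kind of silent failure, here is a hedged sketch; the function name realCreateUser and the URL are hypothetical, and mockedFetch just mimics what an auto-generated stub effectively does:

```javascript
// Hypothetical illustration, not Telescope's actual test code: when a test
// framework auto-mocks a module like node-fetch, every call returns a canned
// stub instead of hitting the real service.
const realCreateUser = async (fetchImpl) => {
  // The code under test POSTs a user and reports the response status.
  const res = await fetchImpl('http://localhost/v1/users', { method: 'POST' });
  return res.status;
};

// What an automock effectively does: ignore the request entirely and return
// a plausible-looking "success" response.
const mockedFetch = async () => ({ status: 200, ok: true });

realCreateUser(mockedFetch).then((status) => {
  console.log(status); // 200, even though the real service would respond 201
});
```

The fix in a case like this is to disable the mock for the module under test, so the request actually reaches the microservice.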

I've also seen the quality of reviews continue to increase, week after week.  We've been at this for months, and in much the same way that a runner can slowly add more volume every week, or a lifter increase their bench press in tiny increments, the ability of the group to review each other's code and spot issues has gotten better and better with practice.  In the beginning, I had to review everything, and I still do one of the reviews on most PRs.  But more and more in this release I saw PRs land that I hadn't read, but which turned out to be well reviewed by two or more other students.  It's been great for me to have my own code reviewed too, since I need help as well, and I've been able to fix many things through the students catching my mistakes as I worked.

Despite all the positives I've seen with collaboration and review, I also struggle to overcome some behaviours.  For example: merging code without reviews; hesitancy to review things that weren't specifically assigned to you; assigning everyone to review a PR, which effectively means that no one is assigned.  Review is hard to teach, and easier to learn through experience.  Reviewing code is how I write most of my code: suggesting fixes or simplifications.  I also learn all kinds of things I don't know.  The students assume that they can send me anything to review, and I'll already know how it works, or at least understand the tech they are using.  Often I don't, and I have to go read documentation, or write sample code, before I can provide useful feedback.  As a result, review is as much a documentation and educational process within a project/community as it is a chance to improve how things work.  If you don't request reviews before merging, or you don't get involved in giving them for other people's code, you miss the chance to build a group within the project that understands how something works.  If you want to be the only person who can ever maintain or fix bugs in a piece of code, go ahead and do it all alone, because that's how you'll stay.

Another struggle I have is trying to figure out how to get people to push their code for review well before the day we ship.  I've yet to see a PR that gets reviewed and lands in a single iteration without changes (my own included).  I know I'm not capable of writing perfect code, but some of the students are still learning this the hard way.  It takes several days for a fix to get reviewed, tested, rebased, and landed.  However, yesterday a bunch of people showed up with unreviewed code changes that they expected to be able to ship the same day.  On the one hand, this is easily solved by being a ruthless product manager, and simply refusing to include things in the current release.  If we were in industry, this type of behaviour would result in people losing their jobs, as the rest of the team lost confidence in their colleague's ability to estimate and ship on time.  But this isn't industry, and these aren't employees, so I do my best to help people finish work on time.  Doing so means that mistakes get made, and yesterday's release wouldn't autodeploy to staging or production because of some missed environment variables for one of the containers.  People dropping unfinished code on the floor and walking away, expecting someone else to clean it up, isn't a great strategy for success.

Yet all of this, the victories and defeats, the frustrations and successes, all of it is what it's like to embrace the grind of software development.  If you're not up for it, or don't enjoy it, it's probably good to understand that now.  Most of what we do isn't hard, it's just tedious work that needs to be done, and there's no end of it.  What looks like magic from the outside turns out to be nothing more than a lot of long hours doing unglamorous work.  A release is a chance to let go, to finally exhale after holding our breath and concentrating for as long as we could.  It's nice to take a moment to breathe and relax a bit before we do it all again, hopefully a little stronger this time.

by David Humphrey at Sat Apr 10 2021 16:03:43 GMT+0000 (Coordinated Universal Time)


Royce Ayroso-Ong

Let’s Get the Train Going

Talking to the right people, juggling branches, and reviewing issues

Hey all! This past week I got a couple of my PRs merged (see here and here) and now it’s full steam ahead with accessibility. Below is essentially the code in my latest PR that solves one of the most troubling thorns in my side. With my element decorator PR merged, I can finally single out BlogSpot posts and style them accordingly — and with just 4 lines of CSS code, the sizing issue saga can finally come to an end.

/**
 * Custom styling for different blogging platforms.
 * Known hosts so far are: [medium.com, dev.to, blogspot.com], otherwise the class is "is-generic".
 * To add to this list see the Post.tsx file, under the `extractBlogClassName()` function.
 */
.telescope-post-content.is-blogspot img {
  display: block;
  width: auto;
}

So what have I been doing this week? I’ve continued my accessibility conversation, and I’ve taken this issue off of Yuan’s hands. I’ve been trying my best to review some PRs — but I’ve been wondering what the best course of action is when it comes to PRs that I am not selected to review. I assume that the authors are looking for people who are familiar with the issue, and in that case, I don’t want to assign myself and start giving my critique. On the other hand, I know that some of these PRs are in desperate need of review, since some of the people selected are too busy with their own stuff. I think the way I will approach the following week is just to take my time and give a review whether I am on the list or not; if it wasn’t warranted, then it is what it is, they can just dismiss it, right?

Lastly, once I get the code for the issue above running, I will start to take Yuan’s advice and “…start checking all pages and components, see if the font size and colour pass WCAG 2.0.” (guideline here). Like I said in last week’s post, I want Telescope to be enjoyed by everyone.

by Royce Ayroso-Ong at Sat Apr 10 2021 03:22:48 GMT+0000 (Coordinated Universal Time)


Abdulbasid Guled

DPS911 Blog #12: The beginning of the end

Only 2 blog posts until I can stop writing these things...thank god.
Well, maybe I'll drop the odd post here and there!

After last week's bout, things sorta resolved themselves nicely. With less school work to do, this was the week I'd been waiting for to really just relax and take some time to look after myself, which included going for some walks. The weather was amazing this week; you can hardly blame me for that.

In terms of reviews, I got back to doing them, and I did a lot. David ended up bearing the bulk of the PRs I reviewed this week, since the main story was getting a fully functional user service. That, and some other fixes related to our staging and production servers. Talking about all of them would mean I'd be writing this blog post forever, so here's a list of all the PRs he made that I reviewed:

These weren't the only PRs I reviewed of course. Today, I was with David helping Ilya get his search service ready to be merged in. This was a service we've been waiting on for quite some time, as I had a PR that was blocked (that PR was here; however, like with the jest timeout PR I had, rebasing master onto it ended up closing the PR altogether. Good riddance, I say; that was the last of my PRs with a messed-up timeframe of commits, and the temporary fix was merged in by Josue not too long ago. I'll file a new PR to get this sorted out next week). Calvin's parser service also landed today, which means that all of the microservices we've been developing since 1.8 have now finally all landed.

I also did a number of reviews for some front-end related PRs. Namely, two from Yuan, which were the following:

She also had this PR, which I reviewed as well, updating our environment-setup docs to reflect that they now use microservices. A good change, and one that will probably be changed again once the backend code is fully removed.

The more complicated PR was this one by Minh. He's updating the SearchBar design, which means he needs to tweak the new Search Context values that I provided in the new SearchProvider. The updates have been very slow, but since he's using logic that I wrote, I was able to provide my most extensive review ever on a PR. I think I've requested more changes in this PR alone than in any PR I've written or reviewed this entire semester. His PR has been sitting for almost a month now, and we can't have PRs sit in hell for that long, so we're splitting parts of his code into another issue so we can land his PR in time for 2.0.

As for work that I did, well, I had a couple of PRs that got in successfully. My SearchContext bug was resolved, and it finally got merged in. You can find that PR here.

I also updated the jest e2e timeout settings so that our github actions don't time out on the end-2-end tests and randomly fail CI for whatever reason. You can find that PR here.
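A sketch of what such a change might look like; the preset name and the timeout value below are assumptions for illustration, not the actual PR:

```javascript
// Hypothetical jest.config.js fragment (not the actual PR): testTimeout
// raises Jest's default 5000 ms per-test limit so slow e2e runs on CI
// runners don't fail spuriously.
const e2eJestConfig = {
  preset: 'jest-playwright-preset', // assumption: the e2e suite drives Playwright
  testTimeout: 30000, // give each e2e test 30s instead of Jest's 5s default
};

// In the real config file this object would be exported:
// module.exports = e2eJestConfig;
console.log(e2eJestConfig.testTimeout); // 30000
```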

My big PR for this week, however, came in today after starting on it yesterday. Mo had two tasks on Satellite: he was focused on the caching, while the other, porting redis, had been sitting for a long time. I asked him if he was going to continue with it, to which he mentioned that he didn't have the time. Knowing that my service and Calvin's parser service needed redis in satellite, I finally decided to fork Satellite and complete the port with his permission. I got a PR up with most of the code working, but the tests hanging. While working with Ilya on his search service, I got some time with David to help debug this, and it essentially came down to how I was turning redis on and off.

The afterEach made a lot of sense. Using redis.disconnect() would not work here, because the test would run again and fail due to the connection being terminated. So I had to use redis.quit(). The real issue was in my beforeEach. Starting the redis server there was tough. So on David's advice, I made the following changes in the redis.js file:

The try/catch was removed, the password code was removed, and I passed an options object to the function that returns a new redis client, using the REDIS_URL and the options object. In the test case, I called the createClientFunction in my beforeEach, and then I used the done function that jest provides. It's an awesome function that helps ensure that asynchronous connections are properly handled. I had never written database tests before, so this is something I was amazed by. The redis test itself was very basic: just a simple redis ping command that returns "PONG". Applying the changes, and bam! The test closed up perfectly with no problems. A push and merge later, and I was good to go. I did rob Chris of the chance to review it, since I merged it without any reviews, which, again, I need to stop doing. I have this feeling that if I'm watching my code run and it works fine, I can just merge it in, no questions asked. This is what led me to my problems last week; luckily, that didn't occur this time. Satellite was updated, and redis is now available publicly for all services that need it.
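A minimal sketch of the factory pattern described above, assuming names like createRedisClient and REDIS_URL; FakeRedis stands in for the real ioredis constructor so the example runs without a server:

```javascript
// Assumed env var and default, not Satellite's exact code.
const REDIS_URL = process.env.REDIS_URL || 'redis://localhost:6379';

// Stand-in client: the real one would be `new Redis(url, options)` (ioredis).
class FakeRedis {
  constructor(url, options) {
    this.url = url;
    this.options = options;
    this.closed = false;
  }
  async ping() {
    return 'PONG';
  }
  // quit() lets pending commands finish and closes the connection cleanly,
  // so the next test can reconnect; disconnect() tears the socket down
  // abruptly, which is what made the second test run fail.
  async quit() {
    this.closed = true;
    return 'OK';
  }
}

// The factory: returns a new client built from the URL plus an options object.
const createRedisClient = (options = {}, Redis = FakeRedis) =>
  new Redis(REDIS_URL, options);

// In a jest test, createRedisClient() would go in beforeEach and
// client.quit() in afterEach (calling done() once the connection settles).
(async () => {
  const client = createRedisClient();
  console.log(await client.ping()); // PONG
  await client.quit();
})();
```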

I wrapped up this week by reviewing and approving a PR by Josue to add the search service's correct values to our staging and production environment values. You can find this here.

The two main issues I want to tackle for next week are the health check for the post service (https://github.com/Seneca-CDOT/telescope/issues/1938) and the search service being used in the front-end (https://github.com/Seneca-CDOT/telescope/issues/2014). These issues should be simple enough to get in, so I'll try and see if I can start them early on Sunday, since I'll be busy with work this weekend. With our second-last update in, a lot of PRs will be rushed in the next 2 weeks. 2 more weeks to go before my journey in open-source is on a pause. Until next week, see you guys next time!

A small PS: Ramadan starts next week, most likely on Monday, which means I'm gonna be fasting for the next month. Happy Ramadan to all observing! The end of this government shutdown and pandemic cannot come soon enough.

by Abdulbasid Guled at Sat Apr 10 2021 02:33:14 GMT+0000 (Coordinated Universal Time)


Chris Pinkney

Human Murmuration

Credulous at best, your desire to believe in
Angels in the hearts of men

The week started off not with a bang but with a whimper, as I opened an issue that Davedalf the White noticed during our weekly triage meeting. I've been trying to reproduce the bug, but like Pedro, I just couldn't, no matter how hard I tried. Software development can be strange like this sometimes, but unfortunately, if the bug can't be reproduced, fixing the invisible will pose quite a challenge. (Edit: As of this Friday, during our weekly deployment meeting, we're under the impression that this is no longer an issue.)

This was followed up by a quick microservice meeting to discuss the latest status and the massive changes required to fully land the Users Microservice. I'll talk more about that later but the clock is ticking, t-minus 2 weeks and counting from today's date.

Josue and I had a quick meeting with Pedro to discuss some typing issues related to his latest PR which updated the Dynamic Image component we use on Telescope's home page. The meeting was concluded by the suggestion to implement MUI Types. I kind of wish I had learned more about typescript and NextJS but I think I'm okay focusing mostly on the backend for now.

After the meeting, Josue and I reviewed a PR from Davedalf, following some excellent advice to not test this locally (as it's not really something that's testable locally), but instead to read the code and the tests. It's hard to overstate the significance of brief (yet detailed) code explanations.

I also reviewed Yuan's latest PR, which adds a feature that hides the navbar on the initial render of our website, but shows it as the user scrolls down. I pointed out an issue that (presumably) caused this PR to clash with another newly landed PR from Duke which adds (awesome) scroll snapping. Unfortunately, I don't know enough to help beyond pointing out flaws. Living the dream.

On a similar note, I also left my thoughts (such as they are) on Huy's PR which touches up the author section of Telescope. I also reviewed another of Dave's PRs, yet again joined by Doc Josue, which is helpful given that I can barely read a lot of the code he's pushing out lately. Having someone to dumb things down for you is also helpful.

I also had a brief meeting with Dave regarding some issues he was running into when attempting to POST users to the Users microservice in some tests. Yet again Doc Josue was able to save the day, and a fix PR arrived shortly after. It turned out our backend was mocking our node-fetch requests, resulting in data not being sent to the microservice.

I finished off the night by leaving some thoughts about a weird code escaping bug, which was conveniently being caught by another blog post by yours truly. I've worked on our backend's sanitizer previously so fortunately I have a small bit of "insight" into what may or may not be causing the problem.

Friday: the week is over, but we still have to deploy and land all our PRs beforehand. Here's a flurry of PRs that I approved:

  • A PR which (finally) allows user authentication on our Vercel deployments. Something I've been waiting for for a while now.

  • and also ... a followup PR to fix something that I noticed just broke with the Vercel fix!

  • I also left some notes on Illya's search microservice.

  • Finally, I also approved a last minute fix from Doc Josue which fixed our latest prod deployment.

Users Microservice

We finally landed a fix for our paginated GET route (something I tried my best to review) which, admittedly, had me nervous for a few days. The problem we had previously was that the paginated GET route that I created only worked for users whose IDs started from 0. Since we hash our user IDs, this obviously is not a solution. The actual solution is really clever: it keeps track of where we left off and embeds it in the response's Link header, so the subsequent request has all the information it needs to continue from that point:

const query = await db
    .collection('users')
    .orderBy('id')
    .startAt(userToStartAt)
    .limit(perPage)
    .get();

vs.

// note: `db` here is assumed to be the users collection reference
let query = db.orderBy(documentId()).limit(perPage);

// If we were given a user ID to start after, use that document path to add .startAfter()
if (startAfter) {
    query = query.startAfter(startAfter);
}

const snapshot = await query.get();
const users = snapshot.docs.map((doc) => doc.data());

// Add paging link header if necessary, so caller can request next page
addNextLinkHeader(res, users, perPage);

module.exports.addNextLinkHeader = function (res, users, perPage) {
    // If there aren't any results, there's no "next" page to get
    if (!users.length) {
        return;
    }

    // Similarly, if the number of users is less than the perPage size,
    // don't bother adding a next link, since there aren't going to be more.
    if (users.length < perPage) {
        return;
    }

    // Get the id of the last user in this page of results
    const lastUser = users[users.length - 1];
    const lastId = lastUser.id;

    // Construct the body of the header, giving the URI to use for the next page:
    // '; rel="next"'
    const link = new LinkHeader();
    link.refs.push({ uri: `${USERS_URL}?start_after=${lastId}&per_page=${perPage}`, rel: 'next' });

    res.set('Link', link.toString());
};

Clever fixes like these are amazing.
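On the client side, a consumer of this API might follow the paging link like this. This is a hypothetical sketch, not Telescope code; the header format assumed here matches what link.toString() typically produces for a single rel="next" reference:

```javascript
// Pull the rel="next" URI out of a Link response header so the caller
// knows where to request the next page; return null when there is none.
function parseNextLink(linkHeader) {
  if (!linkHeader) return null;
  const match = linkHeader.match(/<([^>]+)>;\s*rel="next"/);
  return match ? match[1] : null;
}

const header = '<http://localhost/v1/users?start_after=abc123&per_page=20>; rel="next"';
console.log(parseNextLink(header)); // http://localhost/v1/users?start_after=abc123&per_page=20
console.log(parseNextLink(undefined)); // null (last page: no Link header sent)
```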

Aside from going over this PR a few times, Josue and I wrote a tool to help export users from the Planet CDOT Feed List (a list of Telescope users and their blog information.) It... actually didn't turn out bad at all! The code is easy to read, maintainable, and best of all, short. It went through a few rounds of reviews (a new personal best for me.)

I also started initial discussions and research into proper e2e testing for the Users microservice, and implementing our own Redis cache. More on that to come by next Friday.

Overall a good week which I spent a lot of time reviewing, commenting on stuff, and having meetings.

In other more personal news:

  • Been playing a bit of Assassin's Creed Valhalla, it's a solid 6/10 which (in the few hours I've played) has been otherwise enjoyable.
  • Been thinking about getting into Rust a lot lately. All this JavaScript lately has been making me itch to get back into a "lower level" language.
    • I really wish I had pushed myself more in OSD600 to try something new. I don't regret my time using Python at all (as it was also new to me), but I guess the grass is, in fact, always greener on the other side. Or maybe it's always rustier? Who knows.
  • Still enjoying The Way of Kings a lot. Finally on part 3 after starting this book nearly 4 months ago. Highly recommend the graphic audio version if anyone else is an audio book fan (they also have a preview on their website which is awesome)
  • Wish me luck with my finals, which start next week.

by Chris Pinkney at Sat Apr 10 2021 01:41:38 GMT+0000 (Coordinated Universal Time)

Tuesday, April 6, 2021


Tony Vu

First attempt at PWA — Failure

I started my exploration of PWA with some prior knowledge about PWA with NextJS. It is always good to know where you want to start in everything. From my past exposure, I learned about next-pwa, an out-of-the-box package to turn a Next project into a PWA. It is advertised as “Zero Config PWA Plugin for Next.js”, and that was what I wanted in order to reduce development time. I wish everything were as smooth as advertised, but that rarely happens. Yes, it has minimal config if you don’t need to make it work with other plugins. I did not know that until my first attempt failed badly because of my lack of understanding of how to combine configs in next.config.js. Below was how I thought I did it right…

The issue with this is that you can only use module.exports once. However, I was not careful enough to check all pages before submitting my commit, and since it showed the install option in the mobile browser, I thought the commit was good to go, until Chris pointed out that the About page did not load. That was when I learned that you should not have two module.exports.
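To sketch the mistake, here is a hedged example; withPWA and withMDX below are stand-in functions (the real next-pwa and @next/mdx wrappers aren't needed to show the point, since each real plugin also just wraps and returns a config object):

```javascript
// Stand-ins for the real plugin wrappers, so this sketch runs anywhere.
const withPWA = (config) => ({ ...config });
const withMDX = (config) => ({ ...config });

// What NOT to do -- the second assignment silently replaces the first,
// so one plugin's settings are thrown away:
//   module.exports = withMDX({ pageExtensions: ['js', 'jsx', 'mdx'] });
//   module.exports = withPWA({ pwa: { dest: 'public' } });

// Instead, nest the plugin calls and export a single config object:
const nextConfig = withPWA(
  withMDX({
    pageExtensions: ['js', 'jsx', 'mdx'],
    pwa: { dest: 'public' },
  })
);
// in next.config.js: module.exports = nextConfig;
console.log(nextConfig.pageExtensions.includes('mdx')); // true
```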

I did a bit of research on the internet and found next-compose , a package to compose plugins together. I was thinking “Great! Problem solved!”. I followed the instruction in the README, and came up with something quite organized.

But then the About page still did not load. It was quite frustrating up to a point that I had to take a break from it and informed the team that we might need to take a different approach.

When I felt more composed, I decided to take another look at the PR and tried to figure out what I did wrong. It turned out that the withMDX plugin is structured a little differently, and the way I set it up with next-compose was incorrect. Also, Chris mentioned that someone during the triage meeting suggested using another, better-established package called next-compose-plugins to solve the issue I had. Combining all the information I had from the internet with what’s required for next-compose-plugins to work, I ended up adding a custom babel config file and a custom webpack config file, as well as installing a few dependencies. Everything seemed to be in order, except it still did not work… My very first attempt with next-compose-plugins can be found here. At this point, I felt that I needed to figure this out; it bothered me so much that I could not allow myself to give up on it. And the series of trials and failures began…

This blog ends here to express my frustration with the first attempt at PWA. In the next blog, I will explain how I successfully implemented a solution that worked.

Thank you for reading.

Tony Vu.

by Tony Vu at Tue Apr 06 2021 23:10:45 GMT+0000 (Coordinated Universal Time)


Yuan-Hsi Lee

Prepare for telescope 2.0

This week, I've been working mostly on UI changes, including user accessibility and investigating an issue in the landing page for UI 2.0.

User accessibility

User accessibility is one thing that I want to focus on for the telescope 2.0 release. There are many rules to follow to improve accessibility, and I started with the obvious one: checking the color contrast and sizes of all our pages and components. I've found that the link colors in dark mode, the contribution card in dark mode, the search help, and the slack icon can all be improved. Also, since I'm working mostly with color styling, I've started to update the colors in our dark and light mode palettes for better management.

Royce will be working with me on user accessibility, looking forward to working with him!

Landing page for UI 2.0

In Pedro's UI 2.0 project, he also plans to modify our banner. I'm assigned the part of moving the nav bar from the banner to the timeline. The goal is to remove the nav bar from the banner, but show it when the user scrolls to the timeline (posts). I've considered a few different solutions so far:

  1. Move nav bar to timeline (posts) component
    Currently, nav bar is above

    so that we can always see it in the top-right corner of the page.

  2. Only display nav bar when the timeline component touches the top of the screen
    This one is challenging for me. I haven't worked with element positions relative to the screen before, so I'm not sure how much time I'll need to investigate it.

  3. Make the nav bar component invisible in banner
    If we don't want it to be shown in the banner area, why don't we make it invisible there? For example, put the navbar element "under" the banner element?

Developer experience

As mentioned in previous post(s), I've been updating documents to follow the changes in telescope (mostly the microservices). I closed my old PR that added a new, temporary document for the microservice transition. Instead, I went back to the existing documents and updated them. Starting with environment-setup.md, I believe there are lots of documents that can be updated.

by Yuan-Hsi Lee at Tue Apr 06 2021 04:10:59 GMT+0000 (Coordinated Universal Time)

Monday, April 5, 2021


David Humphrey

Content

I collaborate on content with a lot of different people.  As a software developer, almost all of my work is done in git and GitHub.  As a professor working at a large institution, most of it is done in Microsoft Office, email, or increasingly, in Teams.  Lately I've been thinking about how these various tools come to shape our sense of what content is, and define our relationship to it, and eventually, to one another.

When we use the word "content," we do so in a number of competing grammatical ways:

  1. She was content with the outcome.  Content is a state of satisfaction, of being at peace, a happiness and willingness to accept something for what it is.  Tied up in this is also a sense of acceptance, and perhaps an understanding that a deeper longing will not be met.  Content is adequate.
  2. He poured out the bottle's contents. Content is what is found within, it is everything included, the full amount of something, what is available.  To think of content as being too little or too much misses what it actually is.  Content is what is there, not what you do or think of it.

The book's table of contents tells me what is to be found within its pages, but my goals as a reader will ultimately define how contented I am with its coverage of some topic.

As a collaborator using git, I am never done creating content.  I write in any number of styles, languages, and software applications, and in every case I am freed from the burden of perfection.  I am able to be human, always aware of my existence within time, that this will not be my last chance to improve things.  While my many failings are always with me, accruing within .git/, they are both on display for all who would judge me, but also hidden in the constant flow of error and eradication of mistakes by new commits.

When I'm in git, content is what is found within: this repository, this release, this branch, this commit, this moment.  To experience content this way is to examine the sea, and look upon a wave, knowing that another follows.

As a collaborator using office software, I am terrified of my mistakes.  I read, re-read, and read again, hoping to uncover the typo, incorrect figure, or failed copy/paste that will undo my assignment, exam, grant application, or email.  I am forever cursed to be human, always aware that my imperfections cannot be separated from my good intentions.  My results are judged on the basis of the final document I must eventually submit.  Whether it is satisfactory or not remains to be seen, for it must be judged, accepted or rejected, and not by me.

When I'm in office, content is how what I've made is received: the satisfaction of the reader, and their acceptance of what I've produced, the likelihood of my work being enough.  To experience content this way is to come face-to-face with something washed-up on the shore.

Moving between these two worlds is jarring.  I am the same person in both, but how my errors are viewed is very different.  In one, it is an expected part of the process of improvement; in the other, it is a stumbling block, a cause for concern, and a failure.

It won't surprise you to hear that my heart belongs to git.  I am now so used to embracing my mistakes, iterating and improving my work, and adopting a continuous-quality process, that having to switch to the more common office-style undoes me.  It's a fascinating experience to witness my colleagues' different approaches to me and my work, depending on which style they prefer.

I only know how to correct, not write correctly, and git, for all its flaws, fits me nicely, for all of mine.

by David Humphrey at Mon Apr 05 2021 18:47:26 GMT+0000 (Coordinated Universal Time)

Sunday, April 4, 2021


Abdulbasid Guled

DPS911 Blog #11: The importance of patience

First of all, thank you David for being there for me this week. I definitely needed that boost of confidence. Go read his stuff here. You won't regret it.

This week, I took a break from the microservice hell I was in and wrote a PR to create a Search Context for our search related props. You can find that PR here. I ended up coding the initial context in about 20 minutes which wasn't so bad. The real problem came with reviews...

So, I only got about 2 reviews for this PR. I wasn't expecting such a low turn-out, especially considering this was for a piece of the front-end, which most of our group was working on. Probably because I did a lot of testing, and hated having to rebase my PRs so frequently, I merged the PR in myself. Suffice it to say, while the GitHub Actions tests were all green, I re-introduced a bug that we had fixed months ago: the search bar making a request on every letter typed. You can imagine my surprise when I was working on some other work for other classes and got a message that it was broken.
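To make the bug concrete, here is a framework-free model of a search box that fires a request on every keystroke versus one that only fires on submit (all of the names here are hypothetical, not Telescope's actual code):

```javascript
// Model of a search box. With fetchOnType enabled, every keystroke triggers
// a network request (the re-introduced bug); without it, a request only
// fires when the user actually submits the search.
function createSearch(fetchFn, { fetchOnType = false } = {}) {
  const state = { text: '', submittedText: '' };
  return {
    type(value) {
      state.text = value;
      if (fetchOnType) fetchFn(state.text); // buggy: one request per letter
    },
    submit() {
      state.submittedText = state.text;
      fetchFn(state.submittedText); // fixed: one request per submit
    },
  };
}
```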

I got cocky and impatient, and it ended up biting me pretty good. I quickly went about searching for a fix, and it took an hour of looking at the code, but it was simple. I just needed to also include the textParam that we had as a context value, so the search results component used that value instead of the actual text when making the network request. We have a useEffect in the context that gets run every time the text changes, so that's what was causing the bug. I went to file the PR and...

My remote master was ahead of all my other branches, and thus it was bringing in merge commits and other commits from other students as well. Great, so now I have to fix my git history once and for all. This had been happening for a few days, and it affects all my current PRs, including the PR to introduce the search service to the front-end (once the search service is merged in), and the PR to increase the Jest e2e timeout seconds so our GitHub Actions don't lag and cause a random failure for no reason. You can find both PRs here and here. In any case, it was a pretty demoralizing week, and as a result, I didn't really do much in the way of reviews.

One thing that the Covid-19 pandemic has done is ruin any sort of live discussion. I try to be active in Slack by talking as much as possible about the current work I'm doing, but it's been clear that the majority of the other students aren't as active. School and life at home certainly play a factor, but when only a few people are talking at any given point, it really ends up raising the question, "Why am I on Slack at all?". If I just need to shut up and do my work, what's the point of even being there? I guess it doesn't help that I'm one of the few people working on microservices, so any PR I put up is going to get very few reviews. Chris is also feeling the same way with his User service, since we don't learn Firestore/Firebase at Seneca. I've tried to help alleviate that by learning as much Firebase as I can in my free time in order to provide some sort of help whenever possible, but I can certainly do more.

I hate rebasing so often, because it means that any PR I have up is either not getting reviewed enough, or it's blocked by another piece of work not in the repo yet and I have to wait. I talked about my SearchContext PR during this week's triage to make it as simple as possible for others to review, and yet, to be blunt, I find the lack of reviews very discouraging. I think we have a great group of developers working on Telescope this year. We've done a lot of work since January, and I wanna ask in this post that everyone step up and at least try to review PRs that they don't feel comfortable reviewing. Our microservices are just as essential as the 2.0 UI updates. Even if it's easier to review front-end code since it's mostly UI based, and even though CSS is one of my weakest points in web development, I STILL make the effort to review UI PRs, because at the end of the day, I learn something valuable that can make me better with CSS. Especially now with the GitHub CLI available, making it very easy to go into another person's PR and actively test it locally, we have the potential to do much better, and I know we can.

There's only 3 weeks left until 2.0 launches and I'm starting to realize that many of the things I wanted to work on will probably not happen. If it's possible for me to continue working on Telescope past DPS911, I'll take that opportunity. With this in mind, I've decided to drop work on the redis data loss feature. I left a comment discussing what a solution can be for any future developers working on this feature here. If I can continue working on this after this month, it'd be great. I feel like this solution would be most ideal at solving this redis data corruption issue. I'll be updating the posts service's health check path to also check Redis to make sure that works once Redis is ported over to Satellite, and then, there's the JWT related stuff I wanted to look over as well. I think I messed up by taking on way too many issues and so I plan on dropping a number of them in the coming days, not because I can't do them, but because there's not much time and it'd be unrealistic for me to look at every one of these issues considering the time.

That about wraps up this post. Made on Saturday instead of Friday due to the Good Friday long weekend. We'll be back on track next week. Until then, stay tuned!

by Abdulbasid Guled at Sun Apr 04 2021 02:59:28 GMT+0000 (Coordinated Universal Time)

Saturday, April 3, 2021


Chris Pinkney

This Horrifying Force (The Desire To Merge)

The weekend crept around like a bad habit. I spent it working at my job and making progress on my two PRs, which couldn't be landed in our Friday release due to an acute programming deficiency (read: time and review constraints). Both PRs went through a rampant series of reviews before they could be landed, for which I am consistently and eternally grateful. Telescope would look like a dog's breakfast were I put in charge of approving or disapproving code integrations. Plus, it's always good to have someone better than you review your code:

But we'll get into the Users micro-service stuff shortly. Switching gears to some Telescope maintainer work, I approved a PR which adds Portainer, a container management GUI (which I had explored a few days prior with Josue) that makes Telescope admins' lives much easier. I used to host a lot of game servers when I was younger (still do), so I love playing around with software like this.

I also approved a small PR which fixes a typo we had in our CSS, and another fixing the way we display hyphens in Telescope users' names. The latter PR had some pretty interesting discussion about how to approach this issue: how should we display long names which contain hyphens? Should we break the name onto a new line, or split the name at the hyphen? Eventually we decided to simply split onto a new line after encountering a hyphen.

I suggested the use of Formik for our new user sign-up page PR. The sign-up page will eventually be sending POST requests to the Users microservice to allow Seneca students to create accounts and add their blog feeds/GitHub info to their profiles. Naturally, it's imperative that we employ double-validation (a word which I totally didn't create just now), meaning we should be validating the information users can input in both the frontend and the backend.

I also led our weekly triage meeting, with my co-pilot Royce. I think it went really well. I certainly didn't feel as nervous as I did the first time leading.

Users Microservice

One change of note to one of my PRs mentioned above was an idea suggested by Head Wizard Davealdore: to validate the query parameters used in the Users micro-service even further. We currently have code in the Users microservice which parses each parameter passed in when executing a GET request to /?per_page=xxx&page=yyy to retrieve Telescope users. These values (xxx and yyy above) must be >= 1 and <= 100 for per_page, and >= 1 for page:

// Clamp per_page to [1, 100], falling back to 100 on missing or invalid input
const parsePerPage = req.query.per_page < 1 ? 100 : Math.min(parseInt(req.query.per_page, 10), 100);
const perPage = !parsePerPage || parsePerPage < 1 ? 100 : parsePerPage;

// Fall back to page 1 on missing or invalid input
const parsePage = parseInt(req.query.page, 10);
const page = !parsePage || parsePage < 1 ? 1 : parsePage;

However, it was suggested to instead delegate some heavy lifting to Celebrate, a library we use in the backend as middleware to our routes, such that we don't have to entirely rely on weird hacky code like the above snippet. It looks something like this:

-    [Segments.QUERY]: {
-      per_page: Joi.number(),
-      page: Joi.number(),
+    [Segments.QUERY]: {
+      per_page: Joi.number().integer().min(1).max(100),
+      page: Joi.number().integer().min(1),

Not only does Celebrate now ensure that the parameters can ONLY be numbers (specifically integers), it ensures that they fall in the range 1 to 100 for per_page and are >= 1 for page. With this change also comes a much needed pruning of the spaghetti logic above: since we're validating both page and per_page, we have no need to parse or clamp the values ourselves, and the code can simply be changed to the following:

-   const parsePerPage = req.query.per_page < 1 ? 100 : Math.min(parseInt(req.query.per_page, 10), 100);
-   const perPage = !parsePerPage || parsePerPage < 1 ? 100 : parsePerPage;
-
-   const parsePage = parseInt(req.query.page, 10);
-   const page = !parsePage || parsePage < 1 ? 1 : parsePage;
+   const { per_page: perPage, page } = req.query;

Awesome. Every time I work with Celebrate, I come out feeling quite pleased with this library and with myself. When a JS library ups your self-esteem, you've probably been inside the house for too long. Sigh.
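To spell out the accepted ranges, here is a hypothetical pure-JS restatement of what the Joi schema above enforces (for illustration only; celebrate does this for us as middleware):

```javascript
// Mirrors the Joi rules: per_page must be an integer in [1, 100] when
// present, and page must be an integer >= 1 when present.
function isValidQuery({ per_page: perPage, page } = {}) {
  const intInRange = (n, min, max = Infinity) =>
    Number.isInteger(n) && n >= min && n <= max;
  const perPageOk = perPage === undefined || intInRange(perPage, 1, 100);
  const pageOk = page === undefined || intInRange(page, 1);
  return perPageOk && pageOk;
}
```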

I also wanted to spend some time later in the week converting the unit tests in the Users microservice to e2e tests with Doc Josue; however, a big PR from Head Wizard Davealdore was put up for review which changed a lot of the background and common files in the Users microservice. Because of these changes I had to alter the order of my operations, since a lot of the files I'd be touching are going to get changed after said PR lands. Thus I instead spent the night working towards finally landing my pagination PR, followed by a review of Davealdore's above-mentioned PR. Also, I think a certain head wizard needs a break... or a beer, or both, preferably both:

I also reviewed a PR about Jest, which added a watch mode to the e2e test runner. Constantly re-running tests was slowly driving me (further) into madness. I actually never thought this had to be added; I figured it was already part of Jest and that I was simply being too lazy to google how to run it.

This weekend I hope to finally get around to converting the users tests to proper e2e tests, submit more reviews to Dave's large PR, and ideally start working on adding our data to Redis as a cache between Firebase and Telescope. Happy Easter.

by Chris Pinkney at Sat Apr 03 2021 17:53:40 GMT+0000 (Coordinated Universal Time)


Royce Ayroso-Ong

Adhering to Accessibility Standards

Status report: cleaning up PRs and setting new goals

Photo by Robert Ruggiero on Unsplash

This week I spent most of my time preparing a pitch for Seneca’s “Pitch Nite”, where our team — among other teams who took part in similar challenge sets in Seneca’s 2021 Hackathon— would meet Sightline Innovation. To see everybody’s different pitches, designs, and demos for their proposed solutions was wonderful. Though with “Pitch Nite” long done, it was time to get back to work.

Tonight I made the suggested changes to last week’s PR regarding decorating each post with a class that matches the blog host platform. I’ve changed how the function is used and where it is called, and I’m planning on dedicating the weekend to making sure this lands before Monday, since I also plan to create a second PR to fix the Blogspot issue right away.

As for my plans to improve Telescope’s accessibility, I’ve contacted Yuan (she’s sort of currently leading the charge) as to where I should begin— I don’t want to set out to fix something she’s already got under control. Moreover, I’ve been using Anton’s initial issue as a good place to start, reading the set industry accessibility standards and making note of what to look out for. For example, there is an entire list dedicated to making sure your website is readable (see here)— I will spend some time going through the list as I examine Telescope inside and out. As for now, I may try and take over this issue (#2001: Improve the visual break between posts in the timeline) to help Yuan with all the front-end improvements.

by Royce Ayroso-Ong at Sat Apr 03 2021 01:26:57 GMT+0000 (Coordinated Universal Time)

Thursday, April 1, 2021


Anton Biriukov

Weekly Telescope Podcast

Incremental improvements to the about page, the first admin buttons, accessibility improvements, introducing Portainer, Google Search Console verification, and getting started on Jest snapshot testing for our front-end are among my highlights for the past week.

Last week we made steady progress towards our UI 2.0 milestone. It has been very nice to see the About page getting more and more polished through the efforts of Chris. It was a tough one, but we finally have a properly styled and responsive About page!

https://telescope.cdot.systems/about/

Yuan has done a very good job improving the accessibility of links in the dark theme in her pull request. We have finally seen the introduction of Portainer (kudos to Josue) and got started on adding UI elements for admin functionality. From my side, it has been interesting to work on configuring Jest and creating the first snapshot test for Telescope’s front-end. The setup process gets really confusing due to a lack of standardization and poor documentation, but Dave was able to give me a hand with it. He says it takes “decades of being stuck” to figure such things out… Anyhow, here is the code for the snapshot of the Logo component:

// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`renders correctly 1`] = `
<img
  alt="Telescope Logo"
  height={50}
  src="/logo.svg"
  width={50}
/>
`;

Adding more snapshot tests is fairly easy and we should put our effort into increasing the coverage in order to avoid burnout and gain more confidence with using automated tools, such as Dependabot.

Another insightful thing I have been exploring is the Google Search Console. The good news is that Telescope is on Google and the required pages have been crawled:

https://search.google.com/search-console

We can also see some statistics on how many times Telescope was mentioned on other URLs and what the top linking sites were:

https://search.google.com/search-console

Surprisingly or not, most of our search appearances came from the US:

https://search.google.com/search-console

However, there is also bad news… our old website still seems to be more popular than the new Telescope:

https://search.google.com/search-console

There is definitely a lot of insightful information available for us to analyze on Google Search Console and I think that we should not hesitate to explore our options there!

Summary

To summarize, here are the PRs that I worked on last week:

And the following is a list of PRs that I have reviewed:

by Anton Biriukov at Thu Apr 01 2021 22:32:09 GMT+0000 (Coordinated Universal Time)

Monday, March 29, 2021


Tony Vu

Dark Theme as an essential factor to developer’s success

This week, I was tasked with bringing back the dark theme to Telescope. We had the dark theme before, along with all the functionality to persist it as a user’s choice, but since the UI was incomplete at that time and there were too many pieces requiring attention, we decided to put a hold on shipping the dark theme.

It was a simple task: I just needed to uncomment my previous code and ensure the ToggleThemeButton is styled the same as the other navigation tabs, with a tooltip on hover. Also, thanks to changes in the theme object, professor Dave recommended changing the background color to pure white, as he believes grey is old school and does not reflect our modernized design. What can I say? Your wish is my command! Now we have very contrasting themes: white for the light theme and black for the dark theme. I feel these are the most obvious choices when it comes to theme modes.

One thing I noticed is that the dark theme did bring joy to at least a few of our contributors, for example Anton and Josue. Some of them even mentioned that they will never go back to the light theme again. From a UX perspective, one might feel that having the option to choose the color theme they prefer gives them more motivation to use the site. It is interesting that a minor thing like a dark theme could affect the success of an application.

Besides helping with the dark theme, I was also reading up on PWAs to find out what it would take to make Telescope one. That will be the focus of my next blog post.

Thanks for reading!

Tony Vu.

by Tony Vu at Mon Mar 29 2021 19:55:13 GMT+0000 (Coordinated Universal Time)

Sunday, March 28, 2021


Royce Ayroso-Ong

Spending the Weekend Thinking

My possible dive into web accessibility and rethinking solutions

In response to last week, my pull request for centring table contents got merged but my pull request for adding a non-breaking hyphen needs revision. My slapped-on solution (born from trial and error) needs a bit more baking in the oven. While working on that PR I was solely focused on just implementing a way to replace the normal hyphen with a non-breaking one as the PR title suggests. However, this direction essentially ignored any other way to solve the original issue — that my last name was being cut down the middle.

Instead of using JavaScript to handle this, it was brought up that this could also potentially be done with CSS (linked resources here). This way, not only does the solution help avoid having my last name cut off, but it also gives us room for longer author names.

After a bit of research, I found an eerily similar issue here; they were having the same dilemma of trying to figure out how not to break at the hyphen. After trying their proposed solutions, I couldn’t get results similar to our JavaScript solution. That being said, I think I can improve the JavaScript solution by first using Unicode instead of the string literal for the non-breaking hyphen; this will allow future maintainers to quickly understand what the code is doing (before, it just looked like a redundant line of code replacing all hyphens with a hyphen). Furthermore, part of the problem that contributed to my name being cut off was that the container for the authors’ names was simply too small given the amount of extra space on the screen. So, what I did was add a breakpoint that gives larger screens more width for the author name. Hopefully, this solution combines the best of both ideas to accommodate long names that may or may not include hyphens (like mine).
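The JavaScript side of that improvement might look something like this sketch (hypothetical, not the actual PR code): swap each ASCII hyphen for the non-breaking hyphen code point, named so the intent is obvious:

```javascript
// U+2011 is the Unicode NON-BREAKING HYPHEN; browsers will not wrap a
// line at it, unlike the ordinary hyphen-minus (U+002D).
const NON_BREAKING_HYPHEN = '\u2011';

const keepNameTogether = (name) => name.replace(/-/g, NON_BREAKING_HYPHEN);
```

With a named constant, the line no longer looks like a redundant replace-all-hyphens-with-a-hyphen.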

Decorating Each Post With a Class That Matches the Blog Host Platform

I’ve been working on this issue for the better half of last week, and the main problem I encountered earlier was that I couldn’t fully extract the blogging platform name:

// Old solution
const blogClassName = new URL(post.url).hostname.replace('.', '-');
// What we want: medium-com | What we got: roycedev-medium.com

It’s been a long time since I did regex back in my first year Unix course — so I brushed up and did some research. One of the solutions that I found was from this StackOverflow post which gave exactly the intended result:

function domain_from_url(url) {
    var result
    var match
    if (match = url.match(/^(?:https?:\/\/)?(?:[^@\n]+@)?(?:www\.)?([^:\/\n\?\=]+)/im)) {
        result = match[1]
        if (match = result.match(/^[^\.]+\.(.+\..+)$/)) {
            result = match[1]
        }
    }
    return result
}

However, you can’t just copy and paste a solution from the web and use it in a commercial product. I studied what the code did, and made almost a dozen modifications which came out like so:

const parseHostName = (hostname: string) => {
  var matches = hostname.match(new RegExp(/^[^\.]+\.(.+\..+)$/));
  var result = matches ? matches[1] : hostname;
  result = ' ' + result.replace('.', '-');
  return result;
};
const hostName = post ? new URL(post.url).hostname : '';
const blogClassName = parseHostName(hostName);

This produces the same result, yet the two are completely different in structure. The way I did this was to take the old solution, which gave a hostname that needed further parsing, and add a function that does exactly that and extracts the blogging platform name. What we are left with is a solution that decorates each post with the blogging platform name like so:

This will allow Telescope front-end devs to specifically style posts from certain blogging platforms (I’m looking at you BlogSpot).
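For illustration, here is a plain-JavaScript version of parseHostName above, with a couple of sample inputs (the hostnames are made up):

```javascript
// Strip the subdomain if there is one, keep the rest of the hostname, then
// turn the first dot into a hyphen and prepend a space for use in a className.
const parseHostName = (hostname) => {
  const matches = hostname.match(/^[^.]+\.(.+\..+)$/);
  let result = matches ? matches[1] : hostname;
  result = ' ' + result.replace('.', '-');
  return result;
};

// A platform subdomain collapses to the platform name:
// parseHostName('someuser.medium.com') → ' medium-com'
// A two-part hostname has no subdomain to strip:
// parseHostName('dev.to') → ' dev-to'
```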

Future Plans

Telescope 2.0 is coming up fast and I want to do more to contribute and so far I’ve been thinking that I want to join the fight to make Telescope more accessible. This came about when helping my grandparents (who I live with) surf the web since they struggle with certain apps/websites — I realized that things obvious to me (I grew up with the internet by my side) are not so obvious to them. Sometimes designers go for what looks modern/minimal/aesthetic but miss the mark on intuitive design and usability by assuming all users are more or less tech-savvy.

by Royce Ayroso-Ong at Sun Mar 28 2021 18:45:07 GMT+0000 (Coordinated Universal Time)


Yuan-Hsi Lee

User Accessibility and Developer Experience

Telescope 1.9 release is shipped! Hooray!

This week, I gained some new experience in both user experience and developer experience. I'll explain it in this post.

UX

As discussed in the last post, Pedro and I want to handle the title issue. The old title has a big font size, which causes it to wrap easily and expand to 2 lines, which is what we want to avoid.

In this PR, I shrank the title size to make titles show on one line (in most cases) and take up less space.

Before:

After:

This PR also solved the letter-spacing issue on mobile.

Before:

After:

The other 2 PRs I want to mention improve user accessibility. We have an amazing dark mode to switch to, but some font/element colors do not meet the WCAG AAA rating, or even the AA level.

Our old color choices for links in dark mode looked like this:

The gray one is the visited link and the light blue one is the unvisited link. The gray one is hard to read, but when I checked the contrast ratio, the blue one also only had an AA rating instead of AAA.

There are many colors I could choose to meet the required contrast ratio. However, I want it to be more consistent with light mode (the default mode). In light mode, the unvisited link is blue, and the visited link is a dark red-violet.

Therefore, I stuck with blue for the unvisited link in dark mode (but made it brighter to meet the AAA rating) and changed the gray to a pale pink with a hint of purple.

The other PR changes the search bar color in dark mode. There was no config for the hovered search bar in dark mode, so it was using the same color as light mode. I changed the color following the same design pattern as light mode (same color as the background, with a border to tell them apart).

These couple of weeks gave me lots of chances to work on user accessibility, and I enjoy it. I took over another user accessibility issue and will be discussing it with other developers to file more specific improvement issues.

DX

When I was shipping this PR to bring back our admin button in UI 2.0, I found that the old method of running the login server no longer works. The reason is that we're in the transition to microservices, and there are now easier ways to start the needed services separately.

After I talked to professor Dave, he suggested that I write a new document to help other developers handle these environment setups (since this was the second time I'd asked him about it).

In this PR, I gather different scenarios, explain how to do the env setup, and explain why we do it that way. It is challenging for me, since I need to read and understand other people's code. This PR is still in progress; I hope I can get more people to review it and get it merged!

by Yuan-Hsi Lee at Sun Mar 28 2021 03:36:24 GMT+0000 (Coordinated Universal Time)


Ray Gervais

Getting Started with Neovim’s LSP

Or how to alienate yourself in the world of VS Code. In my previous exploits around the terminal-based workflow (which you can read more about here), I had set up a workflow with tmux, alacritty, and vim to great success for my average day-to-day tasks. Over the past while, I’ve wondered how I could further improve the setup and remove the context switch which often occurred when working with other tooling such as VS Code.

by Ray Gervais at Sun Mar 28 2021 00:00:00 GMT+0000 (Coordinated Universal Time)

Saturday, March 27, 2021


Chris Pinkney

Hey Sailor

I started off the week approving a simple but much needed PR from Yuan which shrunk the title font size and added a link to the author's blog (my favourite part.) I then went on to approve YET ANOTHER PR from Miss Lee (who has been making some nice additions to our front end apparently) which re-adds our much needed admin buttons to our front end.

Next I set my sights on the ever polite Metropass (if that is his real name). I reviewed Mo's really cool PR and left my thoughts for him to digest. I had suggested that in addition to hardcoding how long we specify our cache ages (i.e. how long the browser should cache a piece of data vs. requesting a new piece of data all over again), the developer could alternatively pass a specific value to specify how long they want to cache their stuff (the ever technical word.)
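A hypothetical sketch of that suggestion (the names and the default value here are made up, not Mo's actual code): a helper that builds the Cache-Control header with a caller-supplied max-age, falling back to a default:

```javascript
// Build a Cache-Control header value; callers may pass how many seconds
// the browser should keep the response before re-requesting it.
const DEFAULT_MAX_AGE = 3600; // assumed default: one hour

const cacheControl = (maxAgeSeconds = DEFAULT_MAX_AGE) =>
  `public, max-age=${maxAgeSeconds}`;
```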

The PR also reminded me of how switch cases finally got added to Python. I remember Googling how to do them in Python during OSD600 while working on my link checker program, and since Python (at the time) didn't have them, I had to use if/else chains instead. This is kind of an ugly change if you ask me, but not an entirely unwanted one.

I also threw some thoughts about a PR here, and finally I also reviewed Tony's PWA PR.

I remember talking to Tony near the beginning of the semester and us both agreeing to work on the PWA together (though we have since diverged paths massively, since I'm currently obsessed with microservices), so I'm glad to actually see it being worked on. I have to say I'm really amazed at how simple it seemed to set this up. For some reason I was picturing doing something like React Native to get this to work. Nope: simply import a library and Bob's your uncle. Amazing. I even tested it on my phone and it worked beautifully. I was in shock, truly.

Finally, I gave my comrade Ilya a brief lesson on microservices (and Satellite), since he's taking over managing a microservice. I'm really excited to see where it'll head, because I can finally talk about and review microservices after my experience working on one for the last few weeks. Speaking of microservices...

Feeling undeservedly accomplished for now, I went back to touching (finishing?) up the Users microservice. I had at least two goals I wanted to accomplish this week: properly paginate the GET route, and fully set up the Users microservice for prod. First things first, so let's dive in:

I started off by working on paginating (a fancy word for saying "give me only a slice of the cake instead of the whole cake") the GET route for the microservice. After working on it for a while I stumbled upon a major problem: how can I request only n records and know where to start when I don't have a point of reference? I can't just pump gas into my car and know when to stop; I need some sort of reference point. Similarly, I can't just request 20 records from the DB without saying where to start and stop. How would the query know which 20 I'm requesting? The first 20? The second? The third? I want to be able to request 1 page of 20 records, another page of the following 20, and a third page of another 20 records.

Generally in these situations there's something called an offset. I can request 20 records on the 5th page and simply offset which records I want by 20 * 5, thus ensuring that I get records 100-120. But not in Firestore! Another gotcha that's slowly pushing me away from the database that I once loved. The problem with this situation is that the offset method in Firestore requests ALL records up to the offset as opposed to the few that I ask for. This is a problem when dealing with massive databases. If I have a database with 100,000 records, and I request 20, why should I pay for the bandwidth of requesting 100,000? (Probably so Google can charge you for it, but that's neither here nor there.)

I contacted Sage Dave and asked for some advice, which left both of us stumped. The solution I came up with is to simply start from user 0 and work my way up from there when requesting n users. If the first user has an id of 0, I can request 10 users on page 1 and 10 users on page 2, and since I know my starting point of reference I can easily request the first 20 users.
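Since offset() still reads (and bills for) every skipped document, the workaround boils down to computing the starting id from the page number. Here is a minimal sketch of that arithmetic; the function and field names are mine for illustration, not the actual microservice's code:

```javascript
// Given sequential integer user ids starting at 0, compute which ids a page
// covers, so a query can select the range directly instead of using offset().
function pageBounds(page, perPage) {
  const first = (page - 1) * perPage; // id of the first user on this page
  const last = first + perPage - 1;   // id of the last user on this page
  return { first, last };
}

// A Firestore query could then select the range directly, something like:
//   db.collection('users').where('id', '>=', first).orderBy('id').limit(perPage)
```

Page 1 of 10 covers ids 0-9, page 2 covers ids 10-19, and so on: no documents are skipped, so none are read unnecessarily.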

I finished my PR and threw it up for review. As with most of my code, I'm getting good reviews with a lot of language-based semantic nitpicks. JS is not my forte. I mean, I have no forte, but if I did, JS would not be it. I'm really starting to enjoy it though.

Next up was making sure that the Users Microservice is ready to be deployed to production. Since our code lives inside of Docker (with traffic managed by Traefik), I have to ensure that my microservice can both receive signals from and send signals to the other microservices as required. The complicated part of this PR was differentiating between which environment the code is currently running in, and how to respond accordingly.

When the microservice is running in dev mode, we have to ensure that we're using the Firebase emulator and not the actual Firebase DB (so as not to incur usage charges when we're simply fixing code or adding features). How do you tell which code to run when, though? This is a minor problem I struggled with a lot in this PR (I think mostly because my knowledge of Docker, Traefik, and dev vs. prod is flaky at best). But the main challenge I faced with this PR was getting the emulator to work inside of Docker's dev environment (there are a lot of minute details and things to keep in mind with this issue, so I'll try to keep this brief).

There are currently two dev versions of this microservice: a Docker version and a local version. Think of them as one and the same entity, just with a different coat of paint. The local version works flawlessly, so why doesn't the Docker version? I simply was not able to communicate with my microservice via Docker. WHY? It's maddening! I felt my sanity slip away while working on it. I explored every Google hit I could think of before relenting and asking for help from Doc Josue. After about 2 hours of us trying to figure this out, we came across the extremely obvious (in hindsight) solution.

You need a few things to ensure that the Firebase emulator functions properly:

  1. You have to make sure that you specify a port and address in the firebase.json file.

  2. You have to make sure that the projectIds match for both the emulator and the firebase config file.

  3. You have to make sure that the FIRESTORE_EMULATOR_HOST environment variable is PROPERLY pointing to the emulated Firebase instance in question.

If you haven't guessed it, I was declaring the Docker address incorrectly: FIRESTORE_EMULATOR_HOST=localhost:8088 vs FIRESTORE_EMULATOR_HOST=firebase:8088. And it makes perfect sense too when you think about it. localhost does not exist to other Docker containers, thus saying "I want you (localhost:6666) to connect to Firebase at localhost:8088" is not applicable. localhost:8088 does not exist from one container to the next. Stupid. Very stupid of me. All we had to do was specify the Docker container's network address (via firebase:8088) and we were back in business. We also briefly tested deploying the microservice to prod using a real Firestore instance and I'm happy to report that everything works as expected!
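In code, the distinction is essentially a one-liner. A sketch of the idea (the service name firebase and port 8088 come from this particular setup, not any general default):

```javascript
// Inside a Docker container, 'localhost' refers to the container itself, so
// the emulator must be addressed by its Docker network service name instead.
function firestoreEmulatorHost(insideDocker) {
  return insideDocker ? 'firebase:8088' : 'localhost:8088';
}

// firebase-admin picks the emulator up from the environment, e.g.:
//   process.env.FIRESTORE_EMULATOR_HOST = firestoreEmulatorHost(true);
```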

Both PRs ended up taking much longer and being way more involved than I had thought either of them would be. I'm really happy that I stuck with it and managed to work through the several blockers I had. I genuinely could not have done it without Doc Josue and Sage Dave, as both issues required more pairs of eyes to finally figure out. Kudos to both of them. 🍻🍻🍻

In more personal news:

  • Currently listening to local Windsor band Woods of Ypres

  • I'm very excited that it's getting warmer and I can finally start my garden up again. If anyone wants to request a specific fruit or veggie to grow, now is the time; simply bring a 6 pack to share when you come to pick up the harvest. That or review my PRs. Preferably the former.
  • I finally got around to watching some of Dirty Money's season 2. It's just as good as season 1 so far.

by Chris Pinkney at Sat Mar 27 2021 02:43:01 GMT+0000 (Coordinated Universal Time)


Abdulbasid Guled

DPS911 Blog #10: The importance of proper code review

I know I'm supposed to work on the microservices, but these frontend code reviews are killing me... >.>

So Version 1.9.1 of Telescope is now released. 1.9 had a hiccup so we had to make a hotfix release. As long as the work is done, that's what matters.

I spent most of the week reviewing front-end code. There were a lot of PRs in for the 2.0 design fixes, so I needed to look at those. Front-end code has always been something I enjoy looking at and working with, and although I don't get much of a chance since many in our group prefer working on the front-end, reviewing them is always a breath of fresh air for me. I might even pick up a smaller front-end issue in next week's meeting.

Anyway, here's a general list of PRs that I reviewed this week:

Probably my biggest list of reviewed PRs yet. As many of my issues require me to look into parts of Telescope I haven't looked at before, I used this time to look into PRs that others made so that we can get them in. Sometimes I mess up and others have to correct me, but that's why we always require 2 reviewers to approve before merging (Looking at you, 2022).

In terms of work I did, I made a PR to switch the SearchResults component to use the microservice URLs instead of the old Telescope backend URL. This was all fine and dandy until...

So uhh, the SearchResults component makes a query to Elasticsearch. This returns the query results and the other parameters that the component needs in order to display them. I made the mistake of using the posts microservice URL, which doesn't have a route that returns those queries. As a result, nothing shows up. We have another PR that adds the search microservice, but the owner, Ray, has been slow with updating it, so I was tasked with continuing his work, which is now my priority. Once that's merged in, we can add it to production and switch the frontend to using that URL instead of the posts service URL. You can find the PR here.
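To illustrate the kind of change involved, here is a hypothetical sketch of building a query URL for a search service; the base URL and parameter names are made up for illustration, not Telescope's actual API:

```javascript
// Build a search-service query URL from the user's search text.
const SEARCH_URL = 'http://localhost/v1/search'; // hypothetical search service

function buildSearchUrl(text, page = 0) {
  // URLSearchParams handles encoding of the user-supplied text for us.
  const params = new URLSearchParams({ text, page: String(page) });
  return `${SEARCH_URL}?${params.toString()}`;
}
```

The component can then fetch from the search service's URL rather than a posts route that doesn't understand queries.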

I think the main thing this week was realizing how much more work I'll need to do in order to get better at performing code reviews. Looking at this PR for example, I blocked it from progressing because there wasn't adequate testing to prove that it works in all situations. Another block, from David, was because the approach itself was not the best solution in general: a different reason from mine, but one that probably rings more true because of the evidence backing up his point. This piece in particular made me think a lot:

"However, this breaks how hyphenation works on the web. Hyphens are supposed to provide wrap opportunities to the browser, and removing that seems wrong.

I would expand the width of this container. We have more room on the page, why limit it to such a narrow region?"

Is the code working well enough? Or should we really question the way the code is written? There are good solutions and there are bad solutions. Both will work as software, but one is easier to maintain and makes more sense for developers, while the other does neither. It's not something I consider enough when reviewing front-end PRs, so it's something I really need to work on more.

In any case, that about sums up this blog post. Only 2 more releases left until the 2.0 release. Big things are coming. We've got 4 weeks left to get all this stuff in on time. Let's get to work! Until next time, stay tuned!

by Abdulbasid Guled at Sat Mar 27 2021 02:41:48 GMT+0000 (Coordinated Universal Time)

Friday, March 26, 2021


Nesa Bertanico

My Lovely Experience with Seneca’s Digital Health Hackathon

This year, Seneca launched their yearly Hackathon, and here are the 8 different challenge sets that everyone could pick from and join:

Royce and I teamed up to create Team AREN and we picked the challenge set: Patient Data Consent System.

A situation & role-based data access system that makes it easy for medical professionals to gain access to patient data but also preserves confidentiality and privacy of patients. Patients can give consent to share only what they want to share and medical staff can only gain access to what they need and what was released by the patient. In the event of an emergency, all critical data for treatment must be released to medical staff. Relevant technology that could be used to secure the system and protect data should be considered (e.g., blockchain). Mentorship will be provided by sponsors to aid in solution creation.

Seneca: Digital Health Hackathon

First Day

I learned a very valuable lesson this day: before starting any project, a good programmer should identify who the stakeholders are and what they value most. It is very important to begin any project with system analysis and a diagram to capture requirements, plus business analysis to understand what the business needs. Remember that at the end of the day, you are trying to ‘sell’ a program.

As my team failed to identify our stakeholders and their requirements, we did not impress our judges at the first meeting with our first design.

Luckily for us, we are a team of fast learners and we adapt super quickly. Our good judges explained to us what the application's functionalities should be, and right after the meeting we had a solid new idea.

Second Meeting

We presented our solid idea and the judges liked it! They pointed out our mistakes and which areas of the application could be improved upon. Royce and I had so many ideas for the application that we had to brainstorm each one before adding and implementing it. I will confidently say that communication is truly the key. First, a good and fruitful discussion with our stakeholders laid out the foundation and the blueprint of the application. Second, 1-on-1 brainstorming within the team formed a solid idea and implementation. Overall, we were able to design a logical and functioning application.

Final Countdown

Is it really the final countdown, or should I say the final 30 hours of staying awake to work on this project? Deprived of sleep, team AREN powered through the creation of this amazing application: (Click here to watch our 5 min video presentation)

After the submission of the final project, we slept for 4 hours to get up for the winner announcement.

After a nail-biting moment, the host called out the challenge set winner for the Patient Data Consent System: Team AREN! I was literally screaming my lungs out.

I was so happy because this was the very first Hackathon I have ever competed in, and we got so lucky.

Results

After screaming, I went to eat breakfast at 3pm to take the most peaceful and satisfying nap I have ever had.

I got the amazing hackathon Tshirt and this amazing badge:

I will definitely join any upcoming hackathon! I am just so excited to work with other people; to learn and experience how to understand my stakeholders (I strongly believe this is an important skill for software developers, because we will be maintaining, modifying, or creating different applications throughout our working lives); to learn about new technologies; and to have fun as a Software Developer.

by Nesa Bertanico at Fri Mar 26 2021 05:56:33 GMT+0000 (Coordinated Universal Time)

Sunday, March 21, 2021


Yuan-Hsi Lee

What are users thinking

This week, Pedro discussed his thoughts with me about the title design in telescope. He wants to make the title section smaller.

This is how Telescope handles the post title:

The title sticks to the top when you scroll up, so you can always see the title while you keep scrolling down and reading the post.
If the title is too long, we truncate it with "..."; it can be expanded after the user clicks the title.

However, once the title is expanded, the title section becomes twice as big, occupying too much of the screen.

Spotting this issue was smart. I have little knowledge of user experience, so I never would have figured it out myself. This is a great chance for me to dig into this field.

The solution Pedro came up with is to dynamically adjust the font size. In other words, when the title overflows, the font size is reduced until the title fits the div element.
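A rough sketch of what that idea could look like (hypothetical code, not what we actually shipped): shrink the font size step by step until the element no longer overflows its container.

```javascript
// Reduce a title element's font size until its content fits its container.
// 'el' only needs scrollWidth, clientWidth, and a style object, so it works
// with a real DOM node or a plain mock; sizes are illustrative defaults.
function fitTitle(el, baseSize = 24, minSize = 14) {
  let size = baseSize;
  el.style.fontSize = `${size}px`;
  // scrollWidth > clientWidth means the text overflows horizontally.
  while (el.scrollWidth > el.clientWidth && size > minSize) {
    size -= 1;
    el.style.fontSize = `${size}px`;
  }
  return size;
}
```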

We tried different approaches to make it work, but none of them came with the expected result. Therefore, we were thinking of removing the sticky title feature. As Dave suggested, when we're making changes to the UI design, we should always consider users' "habits". For example, how do other similar kinds of websites design this feature? Users usually get used to one kind of design for a specific type of website. Therefore, browsing other blog-post websites to see how they handle a long title is a good way to start.

On Medium and dev.to, there is no sticky title design. The title goes to the second or third line if it's too long. However, they're not quite the same as Telescope: we have a single timeline gathering all the posts (whole posts, without a "read more" feature) from different authors.

In the meeting, some team members said that they don't really need the title once they start to read a post, but some do. The team came up with a solution: make the font size of the title smaller so that in most cases the title remains on one line. Our title font size was about 3 times bigger than the content font size; compared with ours, similar websites use a much smaller size for their titles.

Better? Can you still tell this is a title of the post?

Biweekly releases keep us shipping changes. Sometimes there is no perfect solution for an issue. But shipping changes in each release makes our project better and better, even if it's just a small change every release.

by Yuan-Hsi Lee at Sun Mar 21 2021 00:26:46 GMT+0000 (Coordinated Universal Time)

Saturday, March 20, 2021


Anton Biriukov

Search Engine Optimization with Next.js

In this article, I will cover four steps that will help improve your website SEO.

Photo by Marten Newhall on Unsplash

When it comes to building any modern website, you will most definitely have to work on the Search Engine Optimization for it, because you want other people to be able to find your website and utilize it. Let’s take a closer look at what it is and how to make it work.

Search Engine Optimization is a term used to describe the process of making your website better for search engines. In order for them to be able to provide better and faster search results, they use crawlers — automated software which:

  • searches (‘crawls’) the internet constantly for new or updated web pages
  • indexes those websites — saves their content and location

There are a number of search engines available, with the most popular being Google, Bing, and Yandex. In this article, we will mostly focus on Google, which handles over 90% of all search requests on the web. Considering this number, just making sure your website is properly indexed by Google is already a big win and should definitely be the first thing on the to-do list.

Verify Your Website (Domain)

Google provides a dedicated console to manage and review the SEO performance of your website. It is quite a powerful tool, which allows you to collect analytics and find ways to improve your SEO. The first step to start using this console is to verify your website in it:

It is fairly simple to use and provides all necessary instructions. Once verified, you will be able to access a variety of tools.

Enable Crawlers

Firstly, it is important to make sure that search engines’ crawlers are able to access your website. One of the most widely used ways to do so is with the robots.txt. Through this file, owners of a website can specify which crawlers are permitted to look for and index which pages. You can get more information about it on the official website or in this guide by Google. Ultimately, it takes the following form:

# Specify allowed crawlers (e.g Googlebot, Slurp, Yandex)
User-agent: *
# Specify which pages the above-mentioned engine should crawl
Allow: /
# Specify which pages the above-mentioned engine should not crawl
Disallow: /search
# Specify how often crawlers should search for new/updated
# pages on your domain (in seconds)
Crawl-delay: 1

This file should sit in the root directory of your website (in Next.js, the ./public folder). It is important to note that although most crawlers will follow the instructions given in this file, it does not prevent them from crawling those pages if they want to. If you wish to keep certain pages private, you should consider password-protecting them.

In fact, most popular websites will have the robots.txt file. For example, you can take a look at https://twitter.com/robots.txt, https://www.google.com/robots.txt or https://github.com/robots.txt.

Create Sitemap

A sitemap is a file that essentially contains a list of all of the pages on your website. Google provides a comprehensive overview in their guide. In order to generate a sitemap for our Next.js website, we need to consider what types of routes we have (static, dynamic). We also need to decide how often we want to update it, or which events should trigger an update. Once generated in .xml format, we need to compress it and store it in the root directory of the website (the ./public folder for Next.js apps).
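As a rough illustration of what a generator does, it simply maps each route to a <url> entry. This is a hand-rolled sketch with made-up routes; real projects usually generate the file with a library or a build script:

```javascript
// Build a minimal sitemap.xml string from a base URL and a list of routes.
function buildSitemap(baseUrl, routes) {
  const urls = routes
    .map((route) => `  <url><loc>${baseUrl}${route}</loc></url>`)
    .join('\n');
  return `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${urls}
</urlset>`;
}

// e.g. buildSitemap('https://example.com', ['/', '/about', '/search'])
```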

Hyouk has a great guide on how to implement sitemap generation for a Next.js website. In case you use GitHub to store your project, he also covers how to set up GitHub Actions to trigger new sitemap generation on each deployment to the master branch.

Essentially, you can configure CI/CD in a way that works best for you. For instance, you could also update the sitemap on every new release. Hyouk also provides an easy way to poke Google to tell it to re-index your website again:

$ curl http://google.com/ping?sitemap=http://website.com/sitemap.xml

Lastly, it is also a good practice to add a link to the sitemap file in the robots.txt file:

# Sitemap Link
Sitemap: https://twitter.com/sitemap.xml

Generate Meta Tags

Meta tags are used to specify information about the authors of the website, site name, description, page title, keywords and more. Some of them should be assigned on a page-to-page basis, while some should be assigned globally. In Next.js, such attributes should be specified in ./pages/_document.tsx file. Below is an example of global attributes and a link to the corresponding file for the Telescope project.

<meta name="description" content="Description of your website" />
<meta name="author" content="Author's name" />
<meta name="keywords" content="List, of, keywords" />
<meta name="application-name" content="Application name" />

The canonical link allows you to specify the canonical URL for each page on your website and should be used on a page-to-page basis. For example, you may have development, testing and production environments deployed to https://dev.your-website.com, https://test.your-website.com, and https://your-website.com correspondingly. In this case, you want to tell crawlers that all identical routes under these domains should be treated as duplicates, with the production one being canonical. In Next.js this link should be placed in the <head> tag of each page, for which the ./pages/index.tsx file works best:

<head>
  <link rel="canonical" href="https://your-website.com" />
</head>

Social meta tags provide you with a great way to enrich links to your website posted on social media websites or forwarded in private messages. There are a number of markup tag systems in use, with the most common ones being Facebook's Open Graph and Twitter's cards. Essentially, these protocols allow you to specify information such as the web page title, description, image, etc. to enrich links to your website with. See this file from the Telescope project for reference. In short, you can add the following to the <head> in your ./pages/index.tsx:

<meta property="og:url" content={currentUrl} />
<meta property="og:title" content={pageTitle} />
<meta name="twitter:title" content={pageTitle} />

Some of them should be assigned on a page-to-page basis (like above ones), while some should be assigned globally. For instance, on Telescope we use the following tags in ./pages/_document.tsx:

{/* Facebook's Open Graph */}
<meta property="og:type" content="website" />
<meta property="og:site_name" content={title} />
<meta property="og:description" content={description} />
<meta property="og:image" content={image} />
<meta property="og:image:alt" content={imageAlt} />
<meta property="og:locale" content="en_CA" />
{/* Twitter */}
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:description" content={description} />
<meta name="twitter:image" content={image} />
<meta name="twitter:image:alt" content={imageAlt} />

Once specified and deployed, you can verify these tags using Facebook Sharing Debugger and Twitter Card Validator:

Facebook Sharing Debugger
Twitter Card Validator

If your website also has accounts on Facebook or Twitter, you can also link those by using twitter:site and fb:app_id. See https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/markup and https://developers.facebook.com/docs/sharing/webmasters/ for official details.

by Anton Biriukov at Sat Mar 20 2021 04:17:47 GMT+0000 (Coordinated Universal Time)


Nesa Bertanico

CoroVi App

Created a cross-platform Covid19 mobile application written in Xamarin.Forms with SQLite database.

App idea and functionality

This app tracks the number of cases, recovered, and deaths from Corona.Lmao.Ninja API from a Global perspective or from a certain country.

These are the 3 major pages of the application:

  • HomePage – this is where data from the API will be displayed. The user will be able to check the death, recovery, and cases from a Global perspective or from a certain country.
    • Users can enter any country name they want to see to display their total cases, today cases, total recovered, today recovered, total deaths, and today deaths.
    • There is a list view that contains all countries listed with their cases, recovered, and deaths.
  • SelfAssesmentPage – this is where the SQLite will be used. The users will be asked to do a self assessment if they have COVID or not. The questions I used are from Ontario Self-Assessment
    • Users can start assessment where they will be asked if they are experiencing any of these:
      • Fever and/or chills
      • Cough or barking cough (croup)
      • Shortness of breath
      • Sore throat
      • Difficulty swallowing
      • Runny or stuffy/congested nose
      • Decrease or loss of taste or smell
      • Pink eye
      • Headache
      • Digestive issues like nausea/vomiting, diarrhea, stomach pain
      • Muscle aches
      • Extreme tiredness
    • All of the results will then be saved into the SQLite DB with the time taken.
  • AccountPage – this is where the users are able to see their status, if they have COVID or not from the saved SQLite DB
    • The users can do following manipulations of the SQLite data:
      • View more details
      • Update the existing assessment data
      • Remove the assessment data

Web Services Used

I used the API from:

  • Corona.lmao ALL : I used this API website to extract values to track the total cases, recovered, and deaths and display them on the application. 
  • Corona.lmao COUNTRIES + “any country name”: I used this API to extract values to track the total cases, recovered, and deaths of a SPECIFIC country name and display them on the application. I also used this to gather all the data from ALL countries to be displayed in a list view.

To be able to extract the data I have this Class to hold the data:

public class CountriesClass
{
    public string country { get; set; }
    public int cases { get; set; }
    public int todayCases { get; set; }
    public int deaths { get; set; }
    public int todayDeaths { get; set; }
    public int recovered { get; set; }
    public int todayRecovered { get; set; }

    public CountriesClass() { }
}

And then I have this function that takes the corona.lmao.ninja link, fetches the response as a string, and returns it as a list, all with the help of JsonConvert.DeserializeObject from Newtonsoft.

private string url_allCountry = "https://corona.lmao.ninja/v3/covid-19/countries";

public async Task<List<CountriesClass>> GetCountriesCovid()
{
    try
    {
        HttpResponseMessage res = await client.GetAsync(url_allCountry);
        if (res.StatusCode == HttpStatusCode.NotFound || res.StatusCode == HttpStatusCode.ServiceUnavailable)
            return new List<CountriesClass>();
        try
        {
            var response = await client.GetStringAsync(url_allCountry);
            return JsonConvert.DeserializeObject<List<CountriesClass>>(response);
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message);
            return null;
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
        return null;
    }
}

Then later in my HomePage.xaml.cs I bind my data from the url above:

            allCountries.ItemsSource = null;
            var list = await summaryNetworkingManager.GetCountriesCovid();
            summary_list = new ObservableCollection<CountriesClass>(list);
            allCountries.ItemsSource = summary_list;

Then I display the bound summary_list from my HomePage.xaml:

<ListView x:Name="allCountries" HasUnevenRows="True">
    <ListView.ItemTemplate>
        <DataTemplate>
            <ViewCell>
                <StackLayout>
                    <StackLayout Orientation="Vertical" Padding="2">
                        <StackLayout HorizontalOptions="CenterAndExpand">
                            <Label Text="{Binding country}" TextColor="#0A345E" FontAttributes="Bold" FontSize="Medium"/>
                        </StackLayout>
                    </StackLayout>
                    <StackLayout VerticalOptions="Start" Orientation="Horizontal">
                        <yummy:PancakeView BackgroundColor="#68CAD7" CornerRadius="15,5,15,40" Margin="15,0,0,5" HorizontalOptions="FillAndExpand">
                            <StackLayout Margin="25,5,0,5">
                                <Label Text="{Binding cases, StringFormat='{0:n0}'}" FontSize="Small" FontAttributes="Bold" TextColor="White"/>
                                <Label Text="Total Cases" FontSize="Micro" TextColor="White" Margin="-1,-7,-1,-1"/>
                            </StackLayout>
                        </yummy:PancakeView>
                        <yummy:PancakeView BackgroundColor="#AABBF3" CornerRadius="40,5,5,40" Margin="0,0,0,5" HorizontalOptions="FillAndExpand">
                            <StackLayout Margin="25,5,0,5">
                                <Label Text="{Binding recovered, StringFormat='{0:n0}'}" FontSize="Small" FontAttributes="Bold" TextColor="White"/>
                                <Label Text="Total Recovered" FontSize="Micro" TextColor="White" Margin="-1,-7,-1,-1"/>
                            </StackLayout>
                        </yummy:PancakeView>
                        <yummy:PancakeView BackgroundColor="#FB9C80" CornerRadius="40,15,5,15" Margin="0,0,15,5" HorizontalOptions="FillAndExpand">
                            <StackLayout Margin="25,5,0,5">
                                <Label Text="{Binding deaths, StringFormat='{0:n0}'}" FontSize="Small" FontAttributes="Bold" TextColor="White"/>
                                <Label Text="Total Deaths" FontSize="Micro" TextColor="White" Margin="-1,-7,-1,-1"/>
                            </StackLayout>
                        </yummy:PancakeView>
                    </StackLayout>
                </StackLayout>
            </ViewCell>
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>

SQLite DB Data

CoroVi app stores data into the SQLite DB when a user submits an assessment form.

My SQLite table holds 12 booleans, one for each question; 12 strings that hold the result of each question; and a DateTime to store when the assessment was taken.

Here’s an example of my Assessment Class

public class Assessment
{
    public event PropertyChangedEventHandler PropertyChanged;

    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }

    public DateTime dateTaken { set; get; }

    public bool bq1 { set; get; }

    public string sb1
    {
        get
        {
            if (bq1) return "You have fever and/or chills";
            else return "You have NO fever and/or chills";
        }
        set { }
    }
}

This is how we insert a new item into the db:

public async void startSelfAssessment(object sender, EventArgs e) {

            Assessment newAssessment = await AssessmentManager.InputBox(this.Navigation, null);
            if (newAssessment != null)
            { 
                allAssesments.Add(newAssessment); 
                dbModel2.insertNewToDo(newAssessment); 
            }
        }

This is how we update an existing item in the db:

public async void updateDB(object sender, EventArgs e)
        {
            var toUpdate = ((sender as Button).CommandParameter as Assessment);
            var updatedTask = await AssessmentManager.InputBox(this.Navigation, toUpdate);
            if (updatedTask != null)
            {
                SelfcarePage.dbModel2.updateTask(updatedTask);
            }
        }

This is how we remove an existing item from the db:

public void deleteFromDB(object sender, EventArgs e)
{
    var toDelete = ((sender as Button).CommandParameter as Assessment);
    SelfcarePage.allAssesments.Remove(toDelete);
    SelfcarePage.dbModel2.deleteTask(toDelete);
}

This is how we display all the items from the db:

protected async override void OnAppearing()
{
    SelfcarePage.allAssesments = await SelfcarePage.dbModel2.CreateTable();
    allAssesmentTable.ItemsSource = SelfcarePage.allAssesments;
    base.OnAppearing();
}

Outcome

I learned how to implement a TabbedPage, use different kinds of UI components, parse JSON from a web service, deserialize JSON using NewtonSoft, and store data locally through SQLite DB.

At the start of the pandemic my dad told me to make a mobile app to track COVID updates with a simple self-assessment questionnaire, and I made it happen! I installed this application on his phone so he can use it. He is very proud of me, and that made my heart skip a beat.

I am so happy with the end result of this project, aside from learning Xamarin.Forms I also made my dad proud.

Click here for the GitHub link!

Thoughts

This is my first time touching Xamarin.Forms and C#. I must say that I am a bit sad that Microsoft announced Xamarin.Forms will be deprecated in November of 2021, because they are releasing a new .NET-based product called MAUI (.NET Multi-platform App UI). Maybe I should be excited for MAUI instead?

We are all truly blessed to have the chance to enjoy all these amazing cross-platform frameworks, each framework is only getting better and better each day!

by Nesa Bertanico at Sat Mar 20 2021 04:06:48 GMT+0000 (Coordinated Universal Time)


Royce Ayroso-Ong

Iterative Progress

Status report: it’s best to break up a problem into smaller pieces

Photo by Sigmund on Unsplash

Phew! Dumping all my time into that issue to handle inline images really took a toll, since nothing I did could produce the intended results without unintended consequences. Yet oddly, I feel a sense of resolve to keep going and be more active than ever to get it reviewed and merged. I felt bad earlier this week because I couldn't handle my simple front-end issues, but through the iterative struggle, I've talked with more people this week than in all my previous weeks working on Telescope combined. I guess I just had to admit that I *didn't* have a solid grasp on the bugs that were plaguing me, and that it's better to reach out (and in some cases bug people — get it?) for help than it is to make them wait on a PR that will never get resolved. Moreover, the beautiful thing about iterative progress is that it's iterative (and still progress). No duh, right? Though it's something that escaped my mind while working on my UI issues, as I wanted to get the perfect solution for all to see, not realizing that I could break the problem down into smaller, easier-to-tackle pieces.

Things to do for the weekend and next week: get #1791 and #1807 merged since they’re ready to ship, issue #1983 (since it’s something that I’ve been messing around with), figure out what is expected from this issue #1809 since it already looks good to me, and lastly — put the nail in the coffin to the BlogSpot bug by implementing #1975. Oh and I think I needed to file an issue and PR for the SearchResults still using the old posts URL. That’s it, see you guys next week.

by Royce Ayroso-Ong at Sat Mar 20 2021 03:42:27 GMT+0000 (Coordinated Universal Time)


Abdulbasid Guled

DPS911 Blog #9: Services, Services, Services

Cause my blog titles are very long apparently and they're causing a bug. Well, hopefully, this is short enough for you!

So after the debacle that was the Post microservice, I've finally been freed to work on some other stuff. So what was the first thing I did? I went right back to the posts microservice of course.

You can find that PR here. This PR addressed moving the posts microservice over to the frontend. This required me to make the posts URL an argument in the Telescope Dockerfile so that it can be passed in, because Telescope uses different URLs for the services depending on whether the frontend is being built in development, staging, or production mode. Fun fact: my boi Royce's PR was failing to load any posts in his PR's deployment because I forgot to change the SearchResults.tsx page to use the new posts service. A funny catch from today's meeting.

From there, it was simply a matter of putting the posts service URL inside the next.config.js file, exporting it inside the config.ts file, then using it in place of the TelescopeURL wherever that was used. An unfortunate side effect was that I had to reintroduce feeds back into the posts microservice because they were needed in some of the posts-related components. I'm sure they can be removed at some point, but it's a matter of when and how.
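The next.config.js side of that wiring might look something like this; the variable name and default URL here are illustrative sketches, not Telescope's actual values:

```javascript
// next.config.js: hypothetical sketch of exposing the posts service URL to the
// frontend at build time, so dev/staging/production can each pass their own.
module.exports = {
  env: {
    // POSTS_URL would arrive via a build ARG in the Telescope Dockerfile
    postsUrl: process.env.POSTS_URL || 'http://localhost:8080/posts',
  },
};
```

Anything under env here is inlined by Next.js at build time, which is why the Dockerfile has to receive the URL as a build argument rather than a runtime variable.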

I also worked on modifying our base jest.setup.js to import the development env files. You can find that PR here. This involved removing the jest.setup.js files from any microservice folder since those were only there to pull the env file and I did so in the base jest.setup.js file. I kept the one inside the posts service to import the MOCK_REDIS value, but I might take that out because it gets added in the jest scripts. I'll need to work on that a little.

I worked on some other PRs this week as well. Here are the PRs that I reviewed:

I also did research on possible ways to incorporate private and public keys into our jwt authentication. These would be used to sign payloads, making them more secure as opposed to using a secret from the env. This would also be something that the staging and production endpoints can take advantage of. I'm currently working on this with Yuan and Ilya, and hope to have a PR for this soon. This would also be incorporated into Satellite, making it even easier. I'll have more to report on this next week, hopefully.

Next time, ver 1.9 releases. I was assigned to continue work on the search microservice that Ray started, due to him being tied up with his other work. Juggling that with jwt auth related stuff will be tough, but definitely doable. Until then, stay tuned!

by Abdulbasid Guled at Sat Mar 20 2021 03:12:07 GMT+0000 (Coordinated Universal Time)


Mohammed Ahmed

Satellite and Redis….

My feeling when testing Redis

Oh man, we’re back.

At the start of the week, I was informed of a microservice module called “Satellite”. Now, Satellite is the place where we keep any commonly used microservice code. I really wanted to learn more about microservices, and I thought this was the place.

During this week, I’ve worked on at least 3 modules for Satellite: createError, Redis, and Hash.

hash, and createError were pretty simple for the most part

but redis…. oh my….

Now, porting the code wasn't the hard part. The hard part was getting the test to work properly. All I really had to do was send a PING request, so that the service would return the string "PONG". Well, first I realized that I needed to run Docker. Okay, understandable. But then I needed to create a promise function that would handle the PING command. "So, what's the problem?" you might ask. Well… there's just one tiny problem. Jest does not quit, because the service is still running. Even when I killed Docker, it would still hang.

Now, in terms of figuring out what to do, I need to think about a couple of things: Where should the test go, Telescope or Satellite? Do I need to have a kill command within the test? How long should the promise wait before it receives "PONG"? These are all questions that I need to figure out; otherwise, Redis will not be properly ported.
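On the question of how long the promise should wait, one option is to race the PING against a timer. This is just a hypothetical sketch of the idea, not the code that landed in Satellite:

```javascript
// Hypothetical helper: resolve with the client's PING reply, but give up after
// `ms` milliseconds so a dead connection can't hang the test runner forever.
function pingWithTimeout(client, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`no PONG after ${ms}ms`)), ms);
  });
  // Clear the timer either way, so a finished race doesn't leave a stray
  // pending timeout behind (which would also keep Jest waiting).
  return Promise.race([client.ping(), timeout]).finally(() => clearTimeout(timer));
}

// Works with any client exposing a promise-returning ping(), e.g. a fake one:
pingWithTimeout({ ping: () => Promise.resolve('PONG') }, 50)
  .then((reply) => console.log(reply)); // PONG
```

Even with a timeout, the test file still has to close the real connection (e.g. calling the client's quit() in afterAll); an open connection is exactly the kind of thing that keeps Jest from exiting.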

I think that’s okay though, it’s how you learn after all, with many, many retries (and rebases).

So, what will be my next step? To be completely honest, I need to play around with Redis a lot more. I need to understand how it works in order to make the tests. I won’t give up.

by Mohammed Ahmed at Sat Mar 20 2021 02:36:45 GMT+0000 (Coordinated Universal Time)


Chris Pinkney

Perseverance

The week started off with me leading our weekly triage meeting, with Mo taking notes. There's not too much to talk about, as only a few students were wounded in the telescope attack. I decided to spend a good chunk of meeting time focusing on older issues, as seeing 100+ issues, some of them from 2019, makes my neat-freak skin crawl. I imagine this is a problem most medium-sized projects deal with. I also imagine that larger scale projects simply don't. All in all, I enjoyed leading the meeting but was surprisingly nervous throughout most of it.

I also lightly reviewed one of Sir Dave's PRs, as I happened to be playing musical chairs recently with various PORT values for the plethora of services we're all working on.

On Wednesday I continued to work on the excellent abundance of requested changes to my Users microservice by Mo, Abdul, Doc Josue, and Sir Dave. My all-time favourite requested change would have to be this one. Previously I had two functions which would create different users that I could save to my database (to test routes that return all users, for example) but this change lets me instantiate as many users as I'd like with one function, and simply pass in different information when required. Awesome. I had really wanted to do something like this but couldn't figure out how to syntactically get it down.

I also started working on a fix to the pathetically un-styled About page. I'm still not crazy about working on front-end stuff, but I'd still really like to get better at it, plus, in the immortal words of Doc Josue:

but you know, it's good to know a bit of everything

Although clearly he's never tried to ask three+ designers to all agree on a design of something, as I had several changes, both on the GitHub page and privately on slack, requested of me at the same time. Lots of fun! I did learn a lot about CSS in the process, and what I already knew was a big refresher for me as I'm not the greatest in styling stuff. I do however always enjoy doing CSS stuff... for about the first 30 minutes, then I just wish I had gone into medicine instead. Big thanks to Ilya who spent some time with me teaching me a lot about various CSS properties and breakpoints, and helping me get the PR to where it is today. Hopefully it'll land soon.

Thors-day! I finally got my micro-service approved and landed! Many hours of hard work finally over, with more hours to come. I also got to use the shiny force button too!

I genuinely couldn't have done it without all the guidance and help given to me by my professor and peers. The one thing I love about open source work is the communication coupled with suggested changes. Learning from those better than you is incalculably valuable. I'm really glad that I stuck with it and got it done, although it didn't get completed in as timely a manner as I'd have wanted.

Oh, and naturally my power went out halfway through my final rebase, which resulted in:

PS C:\Users\Chris\Documents\Code\Repos\telescope> git status
fatal: not a git repository (or any of the parent directories): .git

Lovely. I had to reclone my fork and continue on my way. Not the end of the world but definitely annoying.

So what's next? I'm not sure! Figuring out what to do is always part of the fun. I'd like to get this wired up to prod next, then do a test-run to see if everything works as expected. Then move onto addressing several security concerns. Finally, I'd like to start working on actually implementing the users micro-service!

Friday. I started out by reviewing a minor Husky fix by Josue. How strange that dependabot upgrading our Git hook program actually caused it to stop working. I mean that genuinely, not as a snarky comment. It seemed like functionality completely broke between versions and none of us noticed it for a few weeks. Strange. Thankfully Doc patched us up and we're good to go.

I also very quickly approved this PR (generously explained to us during our Friday meeting call), which adds an authorized middleware to Satellite, ensuring that users must be authorized to execute specific routes. I look forward to implementing this, as security is something I was concerned about.

Speaking of our Friday meeting, Royce spent some time working on a fix for inline img tags in Telescope, which he was kind enough to explain to us. I didn't think a simple task like this would be so involved, but given how blog providers handle images differently from one another, it makes sense that Telescope can't accommodate every blog provider equally. Royce proposed working on a fix to detect Blogspot (our current problem-child blog platform) and handle those images differently from the rest. I'm curious to see how this will handle random RSS feeds (i.e. those not on medium/blogspot/dev/etc.) and whether it'll be a problem.

Finally I also released a minor fix to my users micro-service's Dockerfile for prod. This is what I get for copying and pasting without looking through it. However the fix also fixes my OCD as I had just noticed that a few files were not in the same structure as the rest of the micro-services. Phew. Now I can finally sleep well at night.

An overall busy but quiet week for me. I'm really glad that I was able to get the microservice merged and I'm really excited to move further forward with it.

by Chris Pinkney at Sat Mar 20 2021 00:02:49 GMT+0000 (Coordinated Universal Time)

Monday, March 15, 2021


Pedro Fonseca

Telescope 1.8

Telescope 1.8 is a significant mark for everyone active and working on the project; the microservices and the new UI are now on our production server. Which means the new Telescope is flying. And I’m so proud of every active member that contributed to the project since we started porting to NextJS.

The new UI is not 100% implemented, but it is working well.

Learning while reviewing

I was reviewing a PR from Duke when I faced a JS shorthand pattern in his code that I had never seen before. I spoke with Josue, and he already knew the shorthand and pointed me to the docs. It is incredible how JS syntax is like a river that flows. I love it.

The process of reviewing a PR and learning while doing it is good; I used what I learned with Duke in my PR for Posts, Timeline and Post.

Posts component tree.

I spent more time than I expected coding the new Posts, Timeline and Post components. I went through three different versions: the first wasn't effective because supporting the tablet and mobile sizes made the code very long. The second approach had a very different starting point from the first, but I didn't go far with it because at a certain point, with my experience from the first version's code, I realized the second approach would end up in the same place as the first (very different starting points ending in the same place).

There was a structural problem on the first two approaches because for Desktop, some parts of the code needed to be inside a container and these same parts should be outside of a container for mobile devices. I tried to solve everything using CSS in the first two approaches to avoid code duplication, but it was getting highly hardcoded. Wednesday night, two days before the release, I still couldn’t break the hardcoded part into a dynamic code.

Thursday, one day before the release, I started to discuss the code and the ideas behind it with Meneguini, and our communication presented me with what I was missing. I had to accept that I would have to solve the problem using more than CSS.

So I used React powers to refactor the Post component and accomplish what we needed.

1.8 Final thoughts

What we achieved in 1.8 is a result of our communication and effort.

Tony did a good job working on our theme and font issue, delivering what was needed very fast.

Yuan Lee worked on the avatar issue, which is extremely important to our UI. She was incredibly generous with me and the project, waiting for me to finish my PR and avoiding a good amount of conflicts in a possible rebase, so we could coordinate which PR would get merged first.

Yuan Lee also worked with Anton on dependabot, and now she is investigating a solution for our post title.

Duke did excellent work fixing and improving various issues on Telescope’s frontend:

As always, Josue did excellent work, and not only on our UI. Everyone who knows him also knows that he is the kind of player who plays in all positions. Sometimes I try to argue with him that JS is more beautiful than TS, but so far I haven’t made any progress on that.

Josue's main positions.

Meneguini’s help was essential in discussing the post-component puzzle with me, even without being a Telescope member.

We still have a good amount of PRs to fire, a bunch of issues to file, and this is the evidence that we are heading in the right direction.

Thank you so much.
Pedro

by Pedro Fonseca at Mon Mar 15 2021 10:01:22 GMT+0000 (Coordinated Universal Time)

Sunday, March 14, 2021


Anton Biriukov

Weekly Telescope Podcast

In this blog post, I will provide you with some updates on what happened in Telescope in the last couple of weeks.

Photo by Werner Sevenster on Unsplash

Throughout the last 14 days or so we have continued the work on the microservices and UI 2.0, which was quite exciting to see. Besides, I have finally managed to find a good-looking and properly working changelog generator for our releases. Tested in our latest 1.8 release, it categorizes merged pull requests based on the labels specified in the configuration file. The produced result looks quite appealing:

Release 1.8 Changelog

Another thing I was able to configure, and land a PR for, was extending Dependabot coverage to almost every other package file in the project. It is extremely convenient and allows us to check for outdated dependencies in any package file from one spot: https://github.com/Seneca-CDOT/telescope/network/updates. Since we now have 9 files monitored, we applied a sparse update schedule to prevent having 9 PRs open all at once. I have been keeping a constant eye on Dependabot’s behaviour and am relieved to say it seems to be doing its work well.

One observation I can’t emphasize enough is the frequency of dependency updates in our root package.json file. We are getting 6–10 updates for that file each week, and so far I have had to manually re-trigger checks for it to keep up with the pace. I really think we should make these checks daily specifically for this file. Taking into account that our team has been extremely efficient at reviewing dependency update PRs, such a shift should not bring much trouble. On the positive side, if we manage (which we consistently have in the last few weeks) to review at least one dependency update PR each day, Dependabot will likely open another one the following morning. If not, it will just stay blocked and won’t trigger any CI runs.

Apart from that, I have noticed that it is extremely convenient to use the Dependency graph page on GitHub to double-check that all of your dependencies are up-to-date prior to the release. Instead of using npm outdated and creating PRs yourself, all you need to do is click a button. Works very well while the rest of the team debugs Chris’s docker creations!

Besides dependencies, I have also had a chance to update our release workflow once again to incorporate the new end-to-end tests. In the process, we have also identified and fixed a bug with some missing dependencies when our CI runs on forked repositories. Using playwright-github-action really made it easy to make sure all required dependencies are installed on the cloud server running our tests.

Lastly, I have also started looking into our SEO situation. I have conducted a few test searches on Google (e.g. try searching for ‘seneca open source’ or even ‘seneca telescope’) and found that it is currently quite hard to find Telescope without searching for it directly. I am looking forward to improving our meta tags to try to bring Telescope higher in the search results.

Summary

To summarize, here are the PRs that I worked on last week:

And the following is a list of PRs that I have reviewed:

by Anton Biriukov at Sun Mar 14 2021 23:40:42 GMT+0000 (Coordinated Universal Time)


Tony Vu

Mock it until it is real enough…

I finally got the approval from Telescope’s reviewers to start writing unit tests on my service. And it was not as straightforward as I would have thought…

My first struggle came from planning my tests. I did not know how to start, as my service includes a chain of middlewares as well as external HTTP requests. I literally spent three hours researching testing with HTTP requests and middleware testing on the internet. Finally I decided that I should test whether my middlewares work correctly and whether the final JSON response body gave me what I expected.

During the process, I got to learn more about supertest and nock. supertest is an HTTP client for the testing environment, and nock is a tool to mock HTTP requests and responses. I have also learned how to make the two tools work together. The hardest part of this testing was convincing myself that I was on the right track, because sometimes I felt like I was just doing random things without any concrete reason. I felt proud of myself for being able to write these tests, because they were quite hard for me to get.

Here are the results of many hours trying…

The joy when I ran jest --coverage and it showed 100% was too much…

Isn’t this beautiful?

Thank you for reading.

Tony.

by Tony Vu at Sun Mar 14 2021 14:38:28 GMT+0000 (Coordinated Universal Time)

Working on feedback and improving my service

I have used the past week to improve my service and get it closer to being approved by the Chief. I have been able to spend a little more time coding for Telescope this week compared to the last few weeks, and I aimed to make it work so I could start writing my unit tests.

One of the things I learned this week from Dave is to notice the little things in code that can improve it. For example, one piece of his feedback that stood out to me was how he noticed that I made two requests, a HEAD and a GET, in my API, and he suggested I could just make the GET request and pass along the body data. That was a subtle but very good suggestion, and it should improve my API’s performance a lot, since most of the delay comes from network requests.

I have also learned about the different types of link elements that could potentially contain a feed URL, things like json, json+oembed, or xml+oembed. Most importantly, I learned how to select multiple selectors in cheerio, which helped me make my code cleaner and more compact.

Everything seems to be ready, and I have become more confident that I will be able to develop microservices myself and be good at it. Now onto the testing phase, where I expect even more struggles… but that is the only way to grow in life.

Tony.

by Tony Vu at Sun Mar 14 2021 14:10:16 GMT+0000 (Coordinated Universal Time)

Saturday, March 13, 2021


Royce Ayroso-Ong

Tinkering With UI

Status report: what to do with you and I

Hey everyone, it’s time for that weekly status report on the work done this week. I’ve taken a break from the reworking of the DynamicBanner, as I’ve become a bit stuck on how exactly it works (something I’ll have to reach out to others to understand), and I’ve instead focused more on getting the whole Search Page UI ported and reviewing other team members’ PRs for updating the UI. Let's start off with two of my own UI issues: handling the size of inline images and updating the SearchHelp component.

During this week’s triage meeting we determined that the issue with the image sizing has been a thorn in our side for quite some time. The problem is that we have to choose between having all BlogSpot photos scaled and centred or having all photos scaled to 32px and inline. With the former option, small inline pictures (like the author's avatar) get scaled up like so:

However, this is the intended behaviour for most of the other photos in a post (centred and scaled). If we go with the latter option to scale all images to 32px and scale down the header, then this is what we get:

You may think that this would be the solution, and it does indeed fix the issues with the blog above, but it has unintended consequences — see below:

Credit to Ilya for helping me test my PR

We mentioned earlier that this has been an issue for a long time, and we risk having it become one of those things that just never gets resolved, so I believe the plan here is to just choose one of the options and roll with it. If it were solely up to me, I would go with the first solution and keep it how it currently is, so as not to risk ruining all posts by making every image within them unviewable; maybe even include the new formatting for the header so that it isn’t oversized.

SearchHelp 2.0 Update

How the current SearchHelp operates is that it appears as an HTML tooltip when you click on an icon. For the new UI, the planned design scrapped this in favour of filling the empty whitespace with the SearchHelp instructions like so:

My implementation

I did my best to implement the design as it is shown, but how exactly it will function is still being decided. I’ve received feedback on how to go about this (suggestions like making it disappear once the user enters a valid search, changing the font, and removing the centring of the text), and I will end up hopping on a call with Ilya to figure out how to make the SearchHelp run smoothly and disappear once the user searches for a post. Maybe we even redesign the thing altogether; we’ll see. I’ve pushed both of my first attempts at the issues (see Fixes #1791 and Fixes #1807) and, after my midterm tomorrow, this is what I’ll be putting all my focus into, so that by the next triage meeting it will be good to go and we can all have a nice-looking Search Page.

Until then, I wish you all a productive and relaxing weekend!

by Royce Ayroso-Ong at Sat Mar 13 2021 04:28:03 GMT+0000 (Coordinated Universal Time)


Yuan-Hsi Lee

Telescope 1.8 Release

Release 1.8 for telescope is quite a special one, because GitHub was down during our scheduled release time. According to our experienced professor, this is the first time he has ever seen GitHub go down like this. I suppose I'm lucky to see this in my first year of open source.

Therefore, we did the PR review through a video call. It was surprisingly efficient. We fixed our PRs and tried to get them into the 1.8 release.

The issue I want to talk about is adding the avatar component. The plan for the avatar is to integrate with GitHub so that we can get users' profile pictures. Before that, we still need a temporary avatar to replace the blank circle, like in the picture below.

Pedro suggested the avatar component from evergreen, but Dave wanted to stick with what we have, which is Material-UI; there is also an avatar component in Material-UI.

However, Material-UI's avatar component only provides a circle; it doesn't generate initials from the given name value, so we need to generate them ourselves. Therefore, my task was to wrap the Material-UI avatar component with customized functionality.

We want our avatar component to accept an author name or an image value. The image value is preferred, but if there is no image, we'll take the name value instead and generate the initials for the avatar.

I want to talk about how to generate initials. I checked evergreen's avatar component code; it actually generates initials from the first 2 words in the name. That works for most cases, since most people have 1 word for their first name and 1 word for their last name (e.g. Piper Chapman). However, there are still some people with more than 2 words in their name; it could be a long first name with multiple words, or a middle name.

The other problem is: should we use a space or a hyphen to separate the words in a name? This is a comment I got in my PR. Personally, I'd use a space instead of a hyphen; a hyphen is more like linking words into one part of a name.

My initials generator code looks like this:

const initials = name
  .split(' ')
  .map((splitName, i, arr) =>
    i === 0 || i + 1 === arr.length ? splitName[0].toUpperCase() : null
  )
  .join('');

This generator separates the words in a name by spaces and ignores everything other than the first and last words. splitName represents the current value, i is its index, and arr is the whole array of split words. i === 0 takes care of the first word of the name, and i + 1 === arr.length takes care of the last word. For example, my friend Abu from OSD600 has 5 words in his name (Abu Zayed Kazi Masudan Nabi); his initials will be AN, instead of AZ. And for me, my name is Yuan-Hsi Lee; there is a hyphen in my first name linking two syllables, so my initials will be YL instead of YH.
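Wrapped in a function (the name getInitials is mine, for illustration), the examples from this paragraph are easy to check:

```javascript
// The same generator as above: first letter of the first and last words.
function getInitials(name) {
  return name
    .split(' ')
    .map((splitName, i, arr) =>
      i === 0 || i + 1 === arr.length ? splitName[0].toUpperCase() : null
    )
    .join(''); // null entries become empty strings when joined
}

console.log(getInitials('Abu Zayed Kazi Masudan Nabi')); // AN
console.log(getInitials('Yuan-Hsi Lee')); // YL
```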

The above shows how the name-initials avatars look.

The other PR related to this one is to centre the initials text. Somehow, this font pushes the text slightly toward the top. Thanks to Anton, Ilya and Minh's help, the puzzle was solved and a solution was made.

by Yuan-Hsi Lee at Sat Mar 13 2021 02:10:47 GMT+0000 (Coordinated Universal Time)


Chris Pinkney

The Shippening: Part 1.8, Micro Madness

Earlier this week I decided to pause progress on my microservice to instead test @humphd's dockerized Firebase emulator. I needed to ensure this works for three reasons:

  • To get a grade in OSD700 by testing a PR
  • To ensure that I could insert data into a local, traefik, dockeriz'd, offline, version of the Firebase Firestore.
  • To ensure a smooth integration into my User microservice.

Last week I participated in a hackathon and used Firebase as my db of choice. I ran into some issues that were preventing me from inserting data into my local version of the db. I had assumed that this was due to the fact that I had never specified a private API key to use. This is a big issue as having to specify a private API key to use the offline emulator means that we have to authenticate each and every developer who needs to use Firestore (this also ruins our potential CI pipeline.) Assuming the worst, I reported my grim findings during our triage meeting on Tuesday.

Evidently this is not the case as I did not specify one when testing this PR. Hopefully the fact that the emulated version now works is due to PEBKAC, RTFM, or both. I need to release a draft PR in order to get more devs to test my code.

I tested this PR by making a separate (i.e. bastardized) version of my current microservice work that I use to send data to my Firestore, and pointing it at the dockerized version. It sounds simple, but there were a few things I had to do first: I had to lightly reintroduce myself to Docker, fix my Ubuntu WSL2 install (god I need to get Ubuntu working with my fakeraid system...), and do some reading and catching up on the Firestore Emulator. After some time I finally got it to work and reported my findings on the PR.

Anyway, testing this PR was a lot of fun and a headache at the same time. I imagine this feeling is a lot like raising children.

After finishing my review, I proceeded to continue working my microservice. It's finally nearing a point where I can release it to the wild for review. I needed (and am eternally grateful for) guidance, direction, and help to push this thing into a PR. The only thing that I lack now is time. Time, and brain cells mostly. It's getting late in the week and the last 10% of something always takes as much time as the 90% did. Well, maybe not 90% in this case, but it sure felt like it given all the work I've rushed into this in the last few days.

Here is my leftover todo list:

  1. Finish up making some more tests, specifically ones that test Celebrate's validation rules
     a. Mostly done. I could still use several tests covering various smaller bits of code, but those can always be added later, and spending too much time on this won't get a PR up sooner.
  2. Migrate to https://www.npmjs.com/package/@firebase/rules-unit-testing because @firebase/testing is deprecated
     a. Done.
  3. Create a basic README.md file
     a. Done. I'm rather proud of it, actually. It does need some more work, as I need to specify exactly how the service and Docker + Traefik work together to get data where it needs to go.
  4. Implement a list of current deficiencies and issues to discuss when the draft PR goes up
     a. Done. Kind of. I've been keeping track of this, but a lot of them have been fixed and a lot of things have changed in the last few days.
  5. Implement Satellite
     a. Done. I was having some issues with this, though: after implementing Satellite, all my tests were failing. My problem, as expected, was minor and silly. In const { app } = require('../../index');, I wasn't de-structuring app from index, so all the function calls were failing as they didn't exist on app.
  6. Dockerize the microservice
     a. Done. I'm a great admirer of Docker (and Traefik) but it's still confusing to me. I just need more experience. Mostly I lack the context of how things are interconnected via multiple Docker containers + Traefik.
  7. Add Firestore private key placeholders to env.production and env.staging
     a. Not done. I'm lacking both files, so I'll need to iterate on this. Thanks, checklist!
  8. Tidy up the package.json run commands using npm-run-all
     a. Unnecessary: since the entire microservice is run via Docker, npm run commands are a thing of the past.
  9. Create an update (put) route
     a. Done, along with accompanying tests.
  10. Create a delete route
      a. Done, along with accompanying tests.
  11. Add date/time users were created (mostly for funsies)
      a. Not done. Scrapped for time.
  12. Add date/time users were updated (again, for funsies)
      a. Not done. Scrapped for time.
  13. Migrate unnecessary package.json dependencies to dev-dependencies
      a. Done.
  14. Finally, create a PR!
      a. Done!
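Item 5's destructuring bug is worth a closer look, since it's an easy one to hit. Here's a minimal sketch of the mistake (the module shape is assumed from the snippet above, not copied from the actual repo):

```javascript
// Sketch of the bug from item 5. Suppose index.js exports an object that
// contains the express app:
//   module.exports = { app };
//
// Wrong: `const app = require('../../index')` makes `app` the whole exports
// object, so app.get, app.post, etc. are undefined and every call fails.
// Right: `const { app } = require('../../index')` destructures the app out.

// A self-contained demonstration of the same mistake:
const fakeModule = { app: { get: (path) => `GET ${path}` } };

const wrong = fakeModule;            // like `const app = require(...)`
const { app: right } = fakeModule;   // like `const { app } = require(...)`

console.log(typeof wrong.get);   // "undefined" — the method isn't on the wrapper
console.log(right.get('/user')); // "GET /user"
```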

So now that the microservice is finally up, I'm hoping for a lot of feedback. I still have things that I need to fix, though mostly minor things and documentation edits. I have a few tests that need to pass, and I'm hoping to make the time for it this week around my work schedule. I'd also like to expand on how exactly to test the PR on the PR page. Sometimes it's hard to translate what little goes on in my head, to fingers and keyboard.

Something else that needs answers that I didn't think about until now: how do I ensure my delete route can only be run by admins? Any fellow proletarian could execute a delete route and wipe out a bunch of users. Questions, questions that need answers, and riddles in the dark.

Minor list of things that did dun need doing:

Oh, and I just reviewed Tony's microservice! I unfortunately missed out on Abdul's and I kept my head buried in the sand for the last week or so trying to finish this up, so I'm glad I got to play around with someone else's. Got looks like a sweet library, that plus celebrate is really turning me towards NodeJS development.

Anyway that's it for me right now. Fun but busy week, and as always looking forward to the next one.

by Chris Pinkney at Sat Mar 13 2021 00:56:50 GMT+0000 (Coordinated Universal Time)

Friday, March 12, 2021


David Humphrey

Observations on Telescope 1.8

We're going to ship Telescope 1.8 later today, and I wanted to write about some of the things I worked on and learned in the process.

There have been a number of themes in this release:

  1. A new approach and pace to dependency updates
  2. Updating our UI for the 2.0 redesign
  3. Refactoring to Microservices

I'll talk about each in turn.

Dependabot Joins the Team

During the 1.8 cycle, we welcomed a new team member, Dependabot.

Anton and Yuan have both put a lot of effort into refining the process to the point where things are working quite well.  I've lost count of how many PRs I've reviewed or seen reviewed to update dependencies over the past two weeks.

We had a bit of a rocky start in our relationship, though.  Robots do what you tell them to do, and Dependabot is designed to create PRs.  It can create a lot of them quickly if you let it.  Like, a lot.  Also, it has many helpful features like automatically rebasing its PRs.

Moving fast and fixing things sounds good, but it has some unintended side effects.  For one, your CI builds go crazy.  The first weekend we enabled Dependabot, I got a surprise email from Netlify telling me that my "Free" account had used too many build minutes, and that I now owed them money.  It was interesting to see how different CI providers handled this, actually.  Vercel rate limited my account so we couldn't do new builds, which seems like the right approach to me.  I almost ran out of build minutes for the month on CircleCI as a result, too.

Now that we've tuned things, it's running a lot better, and the results are quite good.  The dependencies are constantly being updated, small version bump after small version bump.  I think this was an interesting experiment to try.  It's motivated me to get even more test coverage, so that reviewing these updates is easier.  Our front-end badly needs tests.

2.0 UI Redesign

Pedro, Duke, Josue, Tony, Ilya, Huy, Royce, and Yuan have been iterating on the front-end 2.0 implementation.  It's been interesting to see two volunteers from the community (Pedro and Duke) doing the bulk of the work, and doing it well.  I'd like to get all of the front-end devs working at a more similar pace and collaborating more than they have during this cycle.  We have lots to do, and plenty of people to do it.  We just need everyone to dig in.

You can play with the new front-end at https://telescope-dusky.now.sh/ as it gets built in real-time.  The changes are significant, and it's very cool to see how quickly the major overhaul of the pieces is coming together.  I'm enjoying having people on the team with an eye for design and the skills to make it happen in the front-end.

The team has been learning to contend with TypeScript more and more, and the quality of the code is getting better.  I'm teaching Angular and TypeScript in another course at the same time, and it's impressive to see how quickly Telescope has been able to shift to TypeScript and embrace vs. fight with it.

Microservices

I've spent the bulk of my time during this cycle focused on paving the way for our move to a microservices architecture.  We wrote Telescope 1.0 as a monolith, and it worked well.  But one of the main goals I have for this change is to make it possible to disconnect the front-end and back-end (e.g., run on different origins) but still be able to share authentication state.  The students coming to work on Telescope find it hard to run the monolith locally on their computers, and I want to make it possible to develop the front-end without running a back-end, or make it easy to run just one service locally, and use the rest remotely on staging or production.

I gave a talk to the team last week about the ideas behind microservices and how they are implemented.  It's new for all of the students, and has required me to do more of the initial lifting than I'd hoped. Extracting microservices is a lot easier than writing the system from scratch, though.  Much of what we need is already there, in one form or another, and the task is to figure out how to divide things up, what needs to be shared, and how.

One initial benefit we're already seeing is the value of limited scope for each service. For example, I had a good talk with Chris about the User service he's writing.  His code is pretty much finished, but he didn't realize it.  I think he was shocked to learn that he wasn't responsible for all aspects of how a user's info flows through the app.  "Wait, you mean I only need to write these routes and I'm done!?"  Yes! The benefit of a microservice architecture is that you don't have to worry about how the data gets used elsewhere, just that it's available to other parts of the system that need it.

I've had to solve a bunch of problems in order to make this approach possible, and learned quite a bit in the process.

Satellite

My first task was to write a proof of concept service that could be used to model the rest.  I chose to rewrite our dynamic image code as a service.  The Image Service lets you get a random image from our Unsplash collection.  I also included a gallery to show all of the backgrounds.  I wrote unit tests for it, to give the team a model to follow with theirs, and then extracted a base package that could be shared by all of our microservices: Satellite.  The code in Satellite takes care of all the main dependencies and setup needed to write one of these services, allowing each service to focus on writing the various routes it needs.  We've used it to write half a dozen services so far, and it's been working well.  This week I added middleware to support authentication and authorization in routes as well.  Speaking of auth...

Authentication and Authorization

My next task was to tackle authentication and authorization.  In Telescope 1.0, we implemented a SAML-based SSO sign-in flow with Seneca's identity provider.  This required a session to be maintained in the back-end, and since our front-end was being served over the same origin, we didn't have to do anything special to connect the two.

In Telescope 2.0, I want to leverage this same SSO authentication, but extend it with token-based authorization.  Here's a diagram of what that looks like:

The steps are roughly these:

  1. A user goes to our front-end app, perhaps https://telescope.cdot.systems or maybe on an external host like Vercel
  2. The front-end app wants to access a resource on one of our secure microservices, maybe User information.
  3. The user needs to login, so clicks the Login link in the front-end app.  A small bit of state (e.g., random string) is put into localStorage for later.
  4. The user is redirected to our auth service: api.telescope.cdot.systems/v1/auth/login?redirect_uri=https://telescope.cdot.systems/&state=a3f1b3413.  The URL contains two things: 1) redirect_uri containing a URL pointing back to the entry point of the front-end app; 2) our random state.   The latter is used as a ride-along value on all the redirects that are  about to take place, and lets the client know at the end that nothing was tampered with in between.
  5. The auth service receives the request (/login?redirect_uri=https://telescope.cdot.systems/&state=a3f1b3413) and stores the redirect_uri and state in the session. It then prepares a SAML message for this user to authenticate, and redirects them to the SSO identity provider server.
  6. The SSO identity provider receives the request to log in, and shows the user a login page, where they enter their username and password.  This either works or fails, and in both cases they are sent back to the auth server with details about what happened.
  7. The auth service receives the result of the SSO login attempt at /login/callback and examines whether or not the user was authenticated.  If they were,  we create an access token (JWT) and the request is redirected back to  the original app at the redirect_uri: https://telescope.cdot.systems?access_token=...jwt-token-here...&state=...original-state-here...
  8. The frontend app examines the query string onload, and sees the access_token and state.  It confirms the state is what it expects (e.g., compares to what's in localStorage).  The token is then used with all subsequent API requests to our microservices.
  9. A request is made to the secure microservice.  The token is included in the HTTP headers: Authorization: bearer <token>
  10. The secure microservice gets the request, and pulls the bearer token  out of the headers.  It validates it, verifies it, and decides whether  or not the user is allowed to get what they want.  A 200 or 401 is  returned.
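The browser side of steps 3, 8, and 9 can be sketched like this (the storage wrapper and helper names are mine, not Telescope's):

```javascript
// Sketch of the client's role in the flow above: stash random state before
// the redirect, verify it on return, then use the token on API requests.
const storage = new Map(); // stand-in for window.localStorage

// Step 3: before redirecting to the auth service, stash random state.
function beginLogin() {
  const state = Math.random().toString(36).slice(2);
  storage.set('auth_state', state);
  return `https://api.telescope.cdot.systems/v1/auth/login?redirect_uri=${encodeURIComponent(
    'https://telescope.cdot.systems/'
  )}&state=${state}`;
}

// Step 8: back on the front-end, verify the state ride-along and keep the token.
function completeLogin(queryString) {
  const params = new URLSearchParams(queryString);
  if (params.get('state') !== storage.get('auth_state')) {
    throw new Error('state mismatch: possible tampering');
  }
  return params.get('access_token');
}

// Step 9: include the token on all subsequent API requests.
function authHeaders(token) {
  return { Authorization: `bearer ${token}` };
}
```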

I've done almost all of the above in two PRs, one that is merged and the other going in today.  One thing that struck me about doing this work was how many of the core security dependencies on npm are poorly maintained.  It feels like much of this is too important to be left untouched or parked in some random person's GitHub.  I wish node had a bit more of it baked in.

Routers, Routers, Routers!

Breaking our back-end into microservices requires some coordination in the form of an API gateway to tie the various services together.  In Telescope 1.0, we use nginx to serve our static front-end, and also as a reverse proxy to our monolithic back-end.  You can use nginx as a microservices gateway too, but I wanted to try something different.

For Telescope 2.0 we're adding Traefik.  I've wanted to play with Traefik for a while, and this seemed like a logical time.  What I like about Traefik is how it can be integrated with Docker and Docker Compose so easily, both of which we use extensively in production.  Traefik is Docker-aware, and can discover and automatically configure routing to your containers.

I've found the docs and API/config a bit hard to understand at times.  It's a mix of "I can't believe how simple this is" and "why is this so hard to figure out?"  I find that I'm routinely having aha! moments as I break, then fix, things.

Here's a quick outline of how it works.  We do all of our configuration via labels in our docker-compose.yml files.  Here's a stripped-down version of what Traefik looks like with the Auth and Image services:

version: '3'

services:
  traefik:
    image: traefik:v2.4.5
    command:
      - '--api.insecure=false'
      - '--providers.docker=true'
      - '--providers.docker.exposedbydefault=false'
      - '--entryPoints.web.address=:8888'
    ports:
      - '8888'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - 'traefik.enable=true'

  # image service
  image:
    build:
      context: ../src/api/image
      dockerfile: Dockerfile
    ports:
      - '${IMAGE_PORT}'
    depends_on:
      - traefik
    labels:
      # Enable Traefik
      - 'traefik.enable=true'
      # Traefik routing for the image service at /v1/image
      - 'traefik.http.routers.image.rule=Host(`${API_HOST}`) && PathPrefix(`/${API_VERSION}/image`)'
      # Specify the image service port
      - 'traefik.http.services.image.loadbalancer.server.port=${IMAGE_PORT}'
      # Add middleware to this route to strip the /v1/image prefix
      - 'traefik.http.middlewares.strip_image_prefix.stripprefix.prefixes=/${API_VERSION}/image'
      - 'traefik.http.middlewares.strip_image_prefix.stripprefix.forceSlash=true'
      - 'traefik.http.routers.image.middlewares=strip_image_prefix'

  # auth service
  auth:
    build:
      context: ../src/api/auth
      dockerfile: Dockerfile
    ports:
      - ${AUTH_PORT}
    depends_on:
      - traefik
    labels:
      # Enable Traefik
      - 'traefik.enable=true'
      # Traefik routing for the auth service at /v1/auth
      - 'traefik.http.routers.auth.rule=Host(`${API_HOST}`) && PathPrefix(`/${API_VERSION}/auth`)'
      # Specify the auth service port
      - 'traefik.http.services.auth.loadbalancer.server.port=${AUTH_PORT}'
      # Add middleware to this route to strip the /v1/auth prefix
      - 'traefik.http.middlewares.strip_auth_prefix.stripprefix.prefixes=/${API_VERSION}/auth'
      - 'traefik.http.middlewares.strip_auth_prefix.stripprefix.forceSlash=true'
      - 'traefik.http.routers.auth.middlewares=strip_auth_prefix'

I've left some comments, but notice a few things:

  • We enable Traefik for each docker container that we want routed using traefik.enable=true
  • We define a unique router per-service by assigning a name, for example: traefik.http.routers.auth.* vs. traefik.http.routers.image.*
  • We specify rules for how Traefik should route to this container, for example defining a hostname and/or path prefix: traefik.http.routers.auth.rule=Host(${API_HOST}) && PathPrefix(/${API_VERSION}/auth) will mean that https://api.telescope.cdot.systems/v1/auth goes to our auth container in production (local and staging use different hostnames).
  • We define middleware (e.g., altering URLs, compressing, authentication, etc) using traefik.http.middlewares.name_of_our_middleware and then adding the options for that middleware.  For example, stripping the /v1/auth prefix on URLs.

It's pretty much that easy.  Via environment variables we can use the same config in development, CI, staging, and production.  For 1.8 we're going to ship 1.0 and 2.0 running in parallel, and use nginx to manage SSL certificates, compression, caching (Traefik doesn't have this yet, but nginx is amazing at it), etc.  Here's what it looks like:

Our 1.8 Routing Setup with Both Telescope 1.0 and 2.0

Docker

To get all these services to work, I've had to spend a lot of time reworking our Docker and Docker Compose strategies.  I wanted to make it easy to run things locally, in GitHub Actions, and in production.  Doing that with the least amount of hassle required a few things:

  • I've created 3 different env files for development, staging, and production. One of the interesting things I learned this week is that you can a) use overlay docker-compose files (have one extend another); and b) define the files you want to use in the COMPOSE_FILE environment variable.  This means that running our entire system can be reduced to docker-compose --env-file env.development up or docker-compose --env-file env.production up.
  • Our approach uses environment variables a lot for the different configurations, and it's not possible to share them between multi-stage docker builds.  Josue and I spent a bunch of time scratching our heads over this.  Apparently you need to use build-time ARGs instead.  By carefully using a mix of ENV and ARG we were able to accomplish our task, but it wasn't obvious at all.
  • We used to use sed to replace strings (e.g., domains) in our nginx.conf file, but I learned that you can do it with a *.conf.template in the dockerized nginx.
  • There's a nice npm module that wraps docker-compose for JS devs who don't know how to invoke it directly.  I wrapped all of our microservices invocations in some JS scripts to make it easier for the students.
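The COMPOSE_FILE trick from the first bullet looks roughly like this in an env file (the file names here are illustrative, not the repo's actual ones):

```shell
# env.development (sketch): docker-compose reads COMPOSE_FILE and layers
# each file left to right, with later files overriding earlier ones.
COMPOSE_FILE=docker-compose.yml:docker-compose.development.yml

# With that in place, the whole system is one command:
#   docker-compose --env-file env.development up
```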

E2E Testing

With our microservices in place, and Docker and Traefik taking care of routing everything nicely, I needed to figure out a solution for end-to-end testing.  Writing unit tests for the image service was fairly straightforward, but the auth service requires complex interactions between at least 4 different apps, 2 of which need to run in Docker.

I spent quite a while re-working our Jest config, so you can run unit and e2e tests from the root of our project.  I've never had to create a Jest configuration this complex, but lots of other people have, so there were good docs and lots of examples on GitHub.

In the end, Jest can take care of starting and stopping our microservices in Docker, running the tests across all the various services, and do it all with a single command from the project root.

Using this I was able to write automated auth tests using Playwright.  I tried a few different frameworks before settling on Playwright; it handled the redirects my automated tests needed more smoothly than anything else.  With Playwright I can write tests that run in headless versions of Chrome, Firefox, and WebKit, and simulate user interactions in the browser in only a few dozen lines.  It's literally amazing.  Seeing the tests running in GitHub Actions was very satisfying.

Conclusion

I'm hoping to see all this work shipped in a few hours after our class meets.  It's been a lot of work for me to get all these pieces in place, but now that they are, we should be able to write, test, and ship code a lot faster.

I'm hoping that by 1.9 we'll have all the services roughly in place and we can start moving away from the 1.0 back-end toward the new stuff.

by David Humphrey at Fri Mar 12 2021 16:02:36 GMT+0000 (Coordinated Universal Time)

Thursday, March 11, 2021


Abdulbasid Guled

DPS911 Blog #8: The joy of writing services

With all the failures of them not working as intended along the way. Because the only easy day was yesterday! (And yes, I took a Call of Duty reference here, lots of fun with these games back in the day, not so much now).

This week's blog post is up much earlier than I normally write these. This is because I wanted to focus more on another assignment I have due on Friday. The topic of today: The Post Microservice!

This one was a big one. You can find the PR for this one here. This PR took so many commits that I felt like I was doing too much. This was important though, since it allowed me to iterate on a new service as much as possible while making mistakes. I had never used Docker before working on this issue, and I had never written unit tests for databases before either. I needed to write unit tests that interact with a mock redis to show that my service works. This took time, but I was able to finally get it working and I'm so happy with the results.

First off, I had to lay the foundation. This involved creating a folder, initializing the package.json, installing the dependencies I needed for my service, and creating the Dockerfile. David, thankfully, went over Docker, so that alleviated a lot of the worries.


The last piece was to add my service to the docker-compose.yml file, as well as add redis there. This would've been much harder if not for the image service already being there, allowing me to use it as a reference. I thought it was amazing that I could specify an environment section and Docker would use it when the service runs. This lets me reference process.env.POSTS_PORT, knowing that I can define it in the docker-compose.yml file and it will work without any problems. I also included the port and API url in all the env files, to be safe.
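That wiring looks something like this in the services section of docker-compose.yml (a trimmed, illustrative fragment, not the exact file):

```yaml
# Illustrative fragment: the environment block hands POSTS_PORT through to
# the container, where the service reads it as process.env.POSTS_PORT.
redis:
  container_name: 'redis'
  image: redis:latest

posts:
  build:
    context: ../src/api/posts
  environment:
    - POSTS_PORT
  ports:
    - '${POSTS_PORT}'
  depends_on:
    - redis
```

Note that inside the compose network, the service reaches redis by its container name ('redis'), not localhost.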


I thought that would be it, but that ended up not being the case. Redis required that I put the container name there, so that my service can connect to redis properly. Without it, my posts service would always return an empty array. This was the bane of my existence for the first 2 days. After some time, this was the result:

SUCCESS! Now to write the unit tests. This took me about a day, since I needed to learn about supertest, the dependency needed to get this done. I also needed to figure out how to mock redis so that I can insert fake data into the mock redis database. This was way easier than I made it, because the code to do this was already implemented last year when the original backend was made. There was code in place to use a mock database based on an env value. I simply had to include a jest.setup.js file, import the env settings and set MOCK_REDIS=1 and jest would use the mock redis instead of the real one when running my unit tests!
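The env-based switch described above boils down to something like this (the two clients here are stand-ins; the real code pulls in redis and a mock implementation):

```javascript
// Simplified sketch of the pattern: pick a mock or real redis client based
// on an env flag that jest.setup.js sets before the tests run.
const realClient = { kind: 'real' }; // stand-in for require('redis').createClient()
const mockClient = { kind: 'mock' }; // stand-in for a mock redis client

function createRedisClient() {
  // jest.setup.js sets MOCK_REDIS=1, so tests transparently get the mock.
  return process.env.MOCK_REDIS === '1' ? mockClient : realClient;
}
```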

The last problem here was my tests were taking too long, and thus failing because the network request took too long. Funny enough, this was due to a misunderstanding of how Promises worked. The addPost function returns a promise implicitly, so I didn't need to explicitly return one via Promise.resolve (Or Promise.all in the case of an array of posts). This was adding extra time to the network request that I simply did not need. Removing that solved all my issues!
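A stripped-down illustration of that fix (addPost here is a stand-in for the real function, which writes to redis):

```javascript
// An async function already returns a promise implicitly.
async function addPost(post) {
  return { id: 1, ...post }; // pretend this writes to redis
}

// Redundant: the extra wrapper adds nothing but overhead.
//   return Promise.resolve(addPost(post));
// Sufficient: just return or await the call directly.
async function run() {
  const saved = await addPost({ title: 'hello' });
  return saved.title;
}
```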

It's not perfect. Currently, the e2e tests are failing CI. Assuming I get those fixed, this PR should be merged this week. This service is huge, as it's needed by many other services. It also needs to interface with services that aren't available yet. For instance, the User Microservice that Chris is currently working on is where we're storing the posts, and the search service, currently stalled, will use Elasticsearch to index posts. This service will absolutely need to be updated at some point, and I'm looking forward to doing that. I learned a lot about Docker while working on this service, as well as redis and supertest. I might have to start using them a lot more than I currently do.

Next time, a quiet week. With this big service in, I can turn my attention to other issues and PRs I wanted to work on. Until then, stay safe and see you guys next time!

by Abdulbasid Guled at Thu Mar 11 2021 18:15:25 GMT+0000 (Coordinated Universal Time)

Wednesday, March 10, 2021


Krystyna Lopez

Seneca Digital Health Hackathon


Seneca Digital Health Hackathon

When I was preparing for interviews, I read many articles on how to answer various questions. Of course, most of the answers have to be tailored to a specific job posting. One question I was asked during an interview was to talk about my hobbies. I had never come across this question, nor had I thought it was something I could be asked. After my interview, I looked into potential answers for a software developer position. To my great surprise, I found very interesting hobbies such as teaching coding classes, contributing to the Open Source community, and, of course, hackathons. The last one was the most interesting to me. I decided: why can't a hackathon be my hobby too? A few weeks later, a school friend mentioned that there would be a Seneca Digital Health Hackathon. That was an opportunity to try out my new hobby.
Days before the Hackathon: my groupmates and I had to pick a challenge set to work on. Our group narrowed its decision to the Cannabis Product Lifecycle. The idea was to develop an application that tracks the Cannabis plant lifecycle from seed or clone until it is ready for sale. There were many different ways to approach the problem, but our group tried to keep the app simple for the end user while still giving the company enough useful information to predict future revenue or avoid potential losses.
The first day of the Hackathon: hard work gathering all the necessary information. Our group thought about using some external APIs so we would have enough information to make decisions, like what conditions are required to cultivate the plant, how many days certain seeds need before they can be harvested, etc. This step was more difficult than our group expected, as there was not enough information available.
The second day of the Hackathon: a lot of coding to be done, and that's not including the database work. Since we tried to keep our application simple and use external APIs, we didn't need a big database. Our application just needed to store enough information about each plant to support future decisions.
Snapshot of our database:

In order to implement the application, we used Spring Boot as our backend. Functionalities that were implemented on the backend are 

1. Get All Categories 

2. Get Category by Id

In our case, a category represents the stage a seed or clone is currently in (for example, planting, harvest, etc.). It would probably also be good to have an option to create new categories for future use.

3. Get All Purposes

4. Get Purpose By Id

Purpose in our database served as a table storing the end purpose of the Cannabis plant (for example, hash, oils, etc.).

5. Create seed

6. Update seed

7. Get seed by Id

8. Get all seeds

The seed table holds all the information that users need to store in order to perform business analysis. 

There are many more things that could be added to this basic functionality, but this was my very first time participating in a hackathon. It was not easy, and many challenges came my way. At some point I was thinking that this is not a hobby I can enjoy and I should just stop, but I did not. I took a lot from this Hackathon and look forward to the next one. I also learned that it is not necessary to succeed on the first try, but it is important to take a good lesson from the mistakes that were made. I also want to say thank you to my group leader, Jennifer Bellafiore; your knowledge and experience came in very handy.


by Krystyna Lopez at Wed Mar 10 2021 17:42:43 GMT+0000 (Coordinated Universal Time)

Sunday, March 7, 2021


Royce Ayroso-Ong

What It’s Like to Compete in a Hackathon

Taking part in Seneca’s Digital Health Hackathon

Image by Seneca

Well, study week is coming to an end. These past couple of days have been amazing working with so many people to put my team's solution together for Seneca’s 2021 Digital Health Hackathon. Team AREN (that’s us!) took on the challenge set of a Patient Data Consent System:

A data access system that makes it easy for medical professionals to gain access to patient data but also preserves confidentiality and privacy of patients. Patients can give consent to share only what they want to share and medical staff can only gain access to what they need and what was released by the patient. In the event of an emergency, all critical data for treatment must be released to medical staff.

We had five days to design a software solution that utilized blockchain technology to solve this problem, and throughout this blog you will get a glimpse as to how we went about it.

The Beginning

First, we had to figure out the scope of what the requirements were — what do the judges specifically want? After looking at the last year's winners, we noted that the actual solution did not necessarily have to be fully implemented and coded. We saw that they demoed their app with Adobe XD, something that my partner NesaByte and I had a lot of experience using. Quickly we started thinking about how our app would work. In my opinion, one of the best ways to design a patient-centric app is to think about the actual use cases. We got to work on Adobe XD designing and prototyping the initial idea. What we came up with is an app that the doctor can use to see the patient data — and with our initial idea done, we went into the first mentorship meeting to present our work.

Long story short, our idea got roasted.

Don’t Take It Personally, Learn and Move On

The only question we asked the sponsors was "are we on the right track?" Quick answer: no. To give a little bit of context, the sponsors (CapitalBlockchain) were the ones meeting with us twice throughout the hackathon to guide us toward the solution they wanted, and the mobile app that we presented to them was not it. Two things they mentioned were that they wanted it to be fast and simple to use while still providing security. What we had was an app with a lot of functionality and detail, but at the cost of simplicity. We had to keep in mind that our target market included the elderly and those who may not be well versed in technology.

We took our harsh lesson to heart and went on to make version 2, focused on ease of use.

Last Mentorship Meeting and Our Final Idea

We scrapped the idea of making it a mobile app and opted for a web portal, since it could be used from the browser (while still being usable on a phone). We prioritized ease of use and made changes so that the user flow for our main use case (inputting and hiding patient info) was as smooth as possible. Take a sneak peek at the design:

Login (logo courtesy of NesaByte)
Patient Info (unchecked boxes are hidden to doctors)

The sponsors thought the idea was better than the last meeting but still had opinions on minor things to change here and there — which we noted and after the meeting included in the final app design. What we had was a fast, easy-to-use web portal design that fulfilled our use cases for the patient data consent system and the requirements of the challenge set.

Closing

We crafted a 5-minute video presentation: script, slides, story, everything (I should note it was actually quite a challenge to shove everything within 5 minutes, there were just so many things to cover). Our idea may be a bit rough around the edges and we may be lacking some of the business details but one thing my teammate and I possessed was the ability to be enthusiastic and to talk. It was literally crunch time recording that presentation video, we were already awake for about 26 hours trying to wrap everything up and you could definitely see it on our faces. We put our heart and soul into the design solution and video… and the only thing left to do was to submit.

We won our challenge set. I couldn’t believe it. I literally screamed “lets gooo!” as they announced Team AREN. Although we didn't win first place in the hackathon, winning the challenge set was something I was proud of. Thanks to Seneca for putting this all together, Team AREN will be returning next year bigger and better.

by Royce Ayroso-Ong at Sun Mar 07 2021 10:09:38 GMT+0000 (Coordinated Universal Time)

Saturday, March 6, 2021


Ilya Zhuravlev

Bye Gatsby (Episode 2)

Greetings Everybody,

This is the second episode of “How I contributed to switching Telescope’s Front-End from Gatsby to Next.js” — the last episode is coming soon.

This week’s open-source adventures started with issue #1376, where we wanted to display the “No Results Found” image on the search page if the search did not return any results. This was another issue that I took from a person who was no longer contributing to Telescope.

No Results Image

The first problem with this PR was that the contributor proposed to use Facebook’s No-Results image — and we definitely don’t want to do that.

Facebook Lawyers when you forget about Copyright (Original Image source)

So the first part of solving this issue was to find an appropriate, royalty-free image that people would like. With this task, I went to Unsplash.com and Undraw.co to look up something that everybody would like.

My findings:

  1. Mistaken Scientist
  2. I am sorry
  3. Doggo
  4. Empty
  5. Magnifying Glass

What’s funny is that when I tried to search Undraw for something like “No Results” — no results were found. Can this be considered a success?

But everybody liked the picture Undraw was using, and thankfully we were able to find it and add it to our project.

Next step: Code it

As we still had the Gatsby front-end as our main one, I needed to implement the no-results image in both front-ends, although the code was almost the same.

Gatsby:

Import Image:
import NotFound from '../../images/noResults.svg';

Add styling:

noResults: {
 display: 'flex',
 alignItems: 'center',
 flexDirection: 'column',
 margin: '2rem',
}

Place image:

<div className={classes.noResults}>
  <img src={NotFound} alt="No Results Found"
    height={200} width={200} />
  <h1>No Results Found</h1>
</div>

Next.js

Everything was the same, except for the image export:
const NoResultsImg = '/noResults.svg';

After that, I waited for reviews, rebased it on the up-to-date master, and merged.

Fix onClick area in Next.js Headers

After porting the headers to Next.js, we noticed that the clickable area of the buttons — the part that reacted to a click and started redirecting to another page — was limited to just the text on those buttons.

This sounded a little bit weird because this is the code that we had:

<Button color="inherit" size="medium" className={classes.button}>
 <Link href="/">
  <a className={classes.links}>Home</a>
 </Link>
</Button>
<Button color="inherit" size="medium" className={classes.button}>
 <Link href="/about">
  <a className={classes.links}>About</a>
 </Link>
</Button>

After looking around the web for some time, I found the correct way to build the navigational buttons:

<Link href="/" passHref>
 <Button color="inherit" size="medium" className={classes.button} component="a">
  <p className={classes.buttonText}>Home</p>
 </Button>
</Link>
<Link href="/about" passHref>
 <Button color="inherit" size="medium" className={classes.button} component="a">
  <p className={classes.buttonText}>About</p>
 </Button>
</Link>

The difference here is the following:

  1. <Button> needs to be wrapped inside of the <Link> tag (<ListItem> for the Mobile Header, as we are using <List> there),
  2. <Button> needs to have component="a" as one of its props, so Next knows you are using this button as a link,
  3. <Link> needs to have the passHref parameter, so Link passes the href property down to its child <Button>.

I also renamed the “Link” styling to “ButtonText”, as the text in the buttons was not a link anymore.

So, if you have the same issues with buttons and links in Next.js, take a look at the above snippet.

Then it was Reviewed, Rebased, Merged — all as usual.

Source

As Always,

This was it for this episode, the last episode is coming soon!

Stay healthy, wash your hands, wish your Mom and the other beloved women in your life a “Happy International Women’s Day” — and have a great day!

Telescope Page: https://telescope.cdot.systems/
Telescope GitHub — We welcome all contributors! — https://github.com/Seneca-CDOT/telescope
My GitHub: https://github.com/izhuravlev

by Ilya Zhuravlev at Sat Mar 06 2021 15:47:33 GMT+0000 (Coordinated Universal Time)


Chris Pinkney

Hackathon update!

Today marks the closing of my Hackathon week, aka the week where I neglected everything in the world, while hacking away on an app.

This week my partner and I participated in Seneca's Hackathon. It was an amazing experience that I highly recommend if you're looking to get humbled, to have a good time, or both. Preferably both. The hackathon featured keynote speakers, mentors, advice panels, and late night mixers!

There were several categories to participate in, but my partner and I decided to participate in the Digital Health/Vaccination Passport to Support category:

Enable organizations to verify health credentials for employees, students, customers, and visitors entering their site based on criteria specified by the organization. Privacy and integrity is central to the solution, and the digital wallet can allow individuals to maintain control of their personal health information and share it in a way that is secured, verifiable, and trusted. Participants can attempt to model processes to implement contact-tracing using digital health passports, to identify other individuals who accessed the same spaces/areas in a set period. Sponsors will provide instances and technical support for teams.

Every team in this category was given the following challenge (but given leeway to be as creative as possible while still maintaining the objectives outlined):

Your challenge here is to make it simple, safe and secure to check vaccination status of individual and help institutions such as your school or office to verify it, to ensure safety of the people returning to them.

Here is my mockup of the app I wanted to design:

Given that this is a category centered around privacy and integrity (two things which I value) naturally, this category spoke to me. It also seemed relatively easy given the uh... current climate of things around the world right now. My team's idea was to have a security and privacy centric app which allows users to quickly and easily display their vaccination and disease testing history with just a few clicks on a screen, which then spits out a QR code for easy reading and verification. We also wanted the app to also educate users on vaccines and diseases.

Luckily my wonderful girlfriend Hope curated information about various diseases (as advocated by the Ontario and Canadian government.) Another thing I wanted to focus on was life after COVID, so my team and I ensured the app also supported a wide variety of other diseases, such that the app doesn't succumb to a simple life of bit rot after this is all over. Over the three days of the hackathon I personally spent about 25-30 hours working on the app.

While we didn't win (and I am too embarrassed to post the source code, which is not hard to find), I really enjoyed my time and can't wait for the next one. My only two regrets are that it couldn't be held in person, and that I'll be graduating soon, meaning I'll be unable to participate in the next hackathon.

The hackathon was sponsored by Salesforce, Sightline Innovation, Microsoft, and others. I attended two keynote speeches during my time, both very interesting, particularly the talk from a Microsoft CFO regarding the state of IoT and the integration challenges various industries are facing. I have no experience in IoT, but embedded devices are something that I've always wanted to get involved with. I asked a question during the keynote on how to get involved and was told to "pick up a Raspberry Pi and start hacking" (paraphrased). So hey, maybe I'll do that?

10/10 experience, highly recommend. I'm excited to attend another hackathon outside of my school.

by Chris Pinkney at Sat Mar 06 2021 03:34:42 GMT+0000 (Coordinated Universal Time)


Yuan-Hsi Lee

Telescope 1.8 Release

After midterm week and study week, it's time to plan for the 1.8 release of Telescope. In this post, I want to talk about my plans and the projects that I'm looking forward to seeing implemented in the release.

In the last release, I was working on updating dependencies and the auto-update tools, including Renovate and dependabot. After working with those, I started to take an interest in GitHub CI/CD, and I plan to learn to write GitHub Actions.

In OSD600 (the first open-source course that I took, in Winter 2020), we implemented a GitHub Actions workflow to apply a code formatter and linter to our repo by selecting a template .yml file to create the actions we needed.

When working with dependabot, I saw that it also uses a .yml file to implement its GitHub Actions workflow.

After introducing dependabot, the PRs it generates were a bit confusing for contributors. For example, what should we do with such a PR? How do we test it? Isn't a major version bump too risky? What do we do if the bump causes a test failure or error?

Therefore, I wrote a document to explain a simple workflow for handling dependabot PRs.

However, it would be more convenient to attach the documentation to each PR so that contributors can read it while they're reviewing.

Currently, it is not possible to customize the description of a dependabot PR. Therefore, I tried to append a comment with a link to the dependabot documentation whenever a dependabot PR is created.

The other challenge for me in release 1.8 is UI 2.0. Thanks to my team member Pedro for assigning me an issue that involves both CSS and JS. The issue is to adjust the font size dynamically in order to fit the title's div. For example, if the default font size causes the title to overflow (which means the title is too long), the font size should be reduced until the whole title can be displayed within its container. This is a very interesting issue. After some research, it turns out that adjusting the font size to the amount of text involves scrollHeight and clientHeight: by comparing these two values, we can tell whether the text is too long, and if so, shrink its font size until the text (title) can be fully displayed.
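
That comparison can be sketched as follows. This is an assumed helper for illustration, not Telescope's actual code: keep shrinking the font while scrollHeight (the full content height) exceeds clientHeight (the visible height).

```javascript
// Hypothetical helper: shrink the font until the element's content fits.
// scrollHeight > clientHeight means the text overflows its container.
function fitText(el, { start = 32, min = 12, step = 1 } = {}) {
  let size = start;
  el.style.fontSize = `${size}px`;
  while (el.scrollHeight > el.clientHeight && size > min) {
    size -= step;
    el.style.fontSize = `${size}px`;
  }
  return size; // the largest size at which the text fits (or the minimum)
}
```

The min bound keeps a pathologically long title from shrinking the text into unreadability.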

Currently, I'm trying different plug-ins as well as writing a function myself. Hopefully, I can finish this feature by release 1.8.

by Yuan-Hsi Lee at Sat Mar 06 2021 03:18:48 GMT+0000 (Coordinated Universal Time)

Thursday, March 4, 2021


Tony Vu

First solo attempt in the world of microservices

Understanding microservices is one thing; actually developing one is another…

I don’t know how to describe my feelings when I started working on the Auto Feed Discovery service, which is part of Telescope’s microservices initiative. A bit excited, a bit nervous, and maybe a bit confused. Despite having gone through two Udemy courses (20 hours each) on microservices, I was not confident that I could develop one myself with minimal guidance. You know how, in a course, the instructor normally holds your hand throughout and you simply copy the code. I might understand it a bit more after the course, but actually writing a service is a big challenge.

One good thing is that this service is not too complicated, and at least I knew where to start thanks to the tips from professor Dave. Basically, I need to chain a few middlewares together before the service can return the expected list of feed URLs. Some of them are very straightforward, like checking whether the URL is valid or whether it is live. The most challenging part of this project is discovering the feed URL from the provided blog URL. My initial thought was to cut corners and look for an npm package from someone who had already done it. Surprisingly, there are not that many, and the packages don't offer exactly what I need. After I tried out a few of them and failed, I decided to explore the code base of one package that might have some code I could use. That was when I learned about cheerio, a fast, flexible & lean implementation of core jQuery designed specifically for the server. By reading the logic of the code, I realized how the author discovers the feed URL from the blog link’s HTML: use cheerio to select the link element with type “application/rss+xml”, and the feed URL will be in its href attribute. GREAT!!! Everything became much easier. I even thought about creating a simple package just for feed discovery, since I don’t think there is a strong package for that right now. It could be a future mini project to work on. I also think cheerio could be a powerful tool if I want to do web scraping for my ML/AI study and experiments — I bet people have been using it for that already.
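
The discovery step can be sketched like this. Per the description above, the real service would use cheerio's selector (shown in the comment); the snippet below substitutes a plain regex so it stays dependency-free, which is cruder than what cheerio does:

```javascript
// Rough sketch: find a <link> advertising an RSS feed and pull out its href.
// With cheerio this would be:
//   const $ = cheerio.load(html);
//   $('link[type="application/rss+xml"]').attr('href');
function discoverFeedUrl(html) {
  const link = html.match(/<link[^>]+type=["']application\/rss\+xml["'][^>]*>/i);
  if (!link) return null; // no feed advertised in the page
  const href = link[0].match(/href=["']([^"']+)["']/i);
  return href ? href[1] : null;
}
```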

Dave has provided quite a list of feedback on my PR, and I am planning to get to it sometime this week (I just noticed it's already Thursday by the time I'm writing this blog LOL). Well, it is what it is. When you are too busy, you lose track of time.

Thank you and until the next blog post!

Tony.

by Tony Vu at Thu Mar 04 2021 13:28:06 GMT+0000 (Coordinated Universal Time)

Getting it ready for UI 2.0

This is a late blog but it is due so here I am…

Thanks to Pedro’s initiative, we have gotten started on implementing UI 2.0, which is supposed to transform Telescope’s whole user experience with the new UI. And I am happy to play some part in it. My first task was to find a good way to adapt the theming object that I was responsible for before to the new palette. The new theming object has fewer properties as well as a shorter list of colors. We would like not to overcomplicate things the way we did the first time, so it's better to be concise this time.

Pedro and I had been communicating effectively on GitHub as well as on Slack to understand the needs and consult each other’s opinions on the matter. Finally, we agreed on the final version, and I also got to update the front-end doc so it would reflect the new palette. I am quite excited to see the new UI. I think the team has been putting in hard work, trying to work around the clock to make it happen. Personally, I just wish I could have been more involved. After this semester ends, I plan to take a deep look at creating a framework so I can better manage my time in order to do more things. One thing that I have already changed is my sleep cycle: I wake up at 5am now so I can put most of my demanding work in the morning, when I am most productive. Hopefully this will create more hours so I can contribute more.

I checked the dev environment the other day, and we had a new header and a new blog body with the author’s name on the site. It looks refreshing now and makes me want to read on. Good job everyone!

Thank you and until the next time!

Tony.

by Tony Vu at Thu Mar 04 2021 13:03:29 GMT+0000 (Coordinated Universal Time)

Wednesday, March 3, 2021


Ilya Zhuravlev

Bye Gatsby (Episode 1)

Greetings Everybody,

This is the first episode of my story about “How I contributed to switching Telescope’s Front-End from Gatsby to Next.js”. During this week, I will be posting 2 more parts of this story — for each week I was contributing to the switch.

Gatsby or smth, I’m not a front-end dev

On the 12th of February 2021,

David filed Issue #1733, named “Switch front-end to next.js”, where he posted a list of all the changes and fixes that needed to be done to be ready for the switch. By that time, a significant portion of the Next.js front-end had already been done, so we needed to finalize the work our team was doing. And so we started working.

This was my first experience working with both Gatsby and Next, so I needed to read some documentation first. Nextjs.org was very useful to quickly understand the basic concepts.

The issue that I took was actually a PR from one of the previous students, who was working on porting the Desktop and Mobile headers from Gatsby to Next. Initially, in the Gatsby version, we had three files responsible for the Header: index.js, DesktopHeader.js and MobileHeader.js. The logic behind those components was that index.js was imported by the page that wanted to render it; index.js would look at the width of the display, decide whether DesktopHeader.js or MobileHeader.js should be used, and return the preferred header.
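
The selection logic described above boils down to something like this (a simplified sketch with an assumed breakpoint value, not Telescope's exact code):

```javascript
// Sketch: index.js picks a header based on the display width.
const MOBILE_BREAKPOINT = 1024; // assumed value for illustration

function chooseHeader(displayWidth) {
  // Wide displays get the desktop header; narrow ones get the mobile header.
  return displayWidth >= MOBILE_BREAKPOINT ? 'DesktopHeader' : 'MobileHeader';
}
```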

Mind-Blowing, am I right?

The student who was working on this issue before me, however, decided when porting the headers to Next.js to merge all of those files into one Header.js file. But the code that they published was not ready to be merged, as a couple of changes were required.

“Easy fix” - I thought.

Well, It was too early to make judgements.

Changes that needed to be done:

  • Remove public/logo.svg as it was already moved to be rendered in another component,
  • Remove menu.svg and pull it from Material UI,
  • Remove the styles/Header.module.css and make it a Material UI makeStyles function,
  • Turn Header.js into Header.tsx (Rewrite JavaScript to TypeScript),
  • Add Copyright&copy; {new Date().getFullYear()} to the Header Footer (in the Mobile Header, we have a Footer that displays the copyright and the current year; this dandy expression was provided to you by Chris),
  • Move Header from pages/index.tsx to pages/_app.tsx.

Well, this did not take me too long to do, so I posted my changes and focused on other homework.

… But little did I know: because the PR I was making changes to was pretty old, I accidentally pushed not only my changes but also all of the changes that my colleagues had pushed to the master branch during the time my PR was idle.

Oh no

So, this PR was ruined and I needed some help.

Big-hearted Anton Biriukov volunteered to help me with the GitHub issue, so we went on a call, and then almost the entire team was on the call. (Felt like a very popular Twitch Streamer, not gonna lie).

We first wanted to just Cherry-Pick my commits out of this nightmare and create a new PR, but then David came in.

“This is going to take more time to fix this mess than to rewrite it completely” — The Father of Telescope.

After that, we just abandoned the old PR and my changes to it and rewrote the whole thing from scratch: copy-and-paste all three Gatsby files, import Link from 'next/link';, change the extension to .tsx, fix a couple of styling and code errors — and we are done.

My new PR landed, got reviewed, merged … and caused a couple of new bugs to be fixed. For example, the Login component suddenly realized that it is Special, and started a riot by not following the styling of the Header. How did I suppress the riot? Well, I will tell you about it in my next blog post ;)

As Always,

Stay healthy, wear a mask, wash your hands, and have a great day!

Telescope Page: https://telescope.cdot.systems/
Telescope GitHub — We welcome all contributors! — https://github.com/Seneca-CDOT/telescope
My GitHub: https://github.com/izhuravlev
David: https://github.com/humphd
Anton: https://github.com/birtony

by Ilya Zhuravlev at Wed Mar 03 2021 18:08:52 GMT+0000 (Coordinated Universal Time)

Tuesday, March 2, 2021


Ilya Zhuravlev

Hello again!

Hi, this is me — Ilya! Yes, I haven’t been writing a lot during the past year, but here I am again.

Source

What happened during the last year?

Well, you might have heard about a tiny incident called the World Pandemic and COVID-19, and Canada treats it very seriously — so there was a lot of “getting used to the new lifestyle locked inside of your house”.

But overall, here is a shortlist of the most interesting events in the last 12 months:

  • I finished the 6th semester of Seneca’s BSD program;
  • The COVID-19 Lockdown started, and all of us got very happy at first…
  • … and really bored and trying to get used to the new way of living some time after;
  • I got hired for a Co-op at TD Bank Personal Banking Quality Engineering — Metrics Initiative (I might be posting a little bit of what I did there later);
  • “Among Us” literally exploded two years after its release;
  • I enrolled in the 8th semester of the BSD program (yes, 8th, not 7th, I know it is weird, but this is the way for me to be able to graduate by the end of this year);
  • And, probably the most interesting one — I am back at the Telescope project.

Telescope

A photo by @zt1970

As you might recall from my previous posts, I was a part of the team that gave a start to the Telescope Project for Seneca Open Source Development back in 2019. And our baby grows extremely fast! There are so many things going on at Telescope right now that I am just not able to describe everything that happened there during the last year.

As people who work on it come and go, the goals of Telescope change. Last semester the group of students working on it worked hard to build a cool new front-end using Gatsby — and this term we were happy to get rid of it and switch to our Next.js version of the front-end. This doesn’t mean that we don’t appreciate the work of the previous term’s students — no, they did a great job — but the reason we decided to switch was that Gatsby is just built for other purposes, and we had a lot of problems supporting it.

By the way, I will be posting about how I contributed to switching the Telescope Front-End from Gatsby to Next.js during the next week — so stay tuned!

Last week’s coding adventures

This week started for me with a Triage 1.8 meeting — and I was leading it. We went through all of the PRs in the Telescope repository and discussed a significant number of issues to be done for the 1.8 release — it was productive. We recently switched to the Next.js Front-End, and now we have 2 major milestones regarding the Front-End: UI 2.0 and Dark Mode.

At first, Dark Mode was my priority, but as I realized a bit later, it will be easier to switch to a new UI first, and then fully implement the Dark Mode feature in our new UI.

As this week was dedicated to midterms, there was not a lot of room for development. At the same time, I was able to file an issue about the Search page height, review a brilliant PR by Huy Nguyen that was closing it with some changes to the UI, and got to my coding work.

Contributors Card

Thanks to Chris Pinkney, we finally have a styled About page. At the same time, adding MDX to Telescope caused a couple of bugs that needed to be fixed. One of them — the styling of the GitHub Contributors card.

MDX is a powerful tool that lets you use markdown to write content for your web-pages. As Chris was the one who added it to Telescope — I will let him tell you more about it here.

MDXPageBase.tsx theme fix

The bug was actually caused by two things, MDX and the new colour palette, so the component was referring to the wrong colours. Fixing the colours was easy; however, I then realized that the colours were still not correct. Why?

When using MDX to render our About page, we added the About footer inside the MDXPageBase.tsx — so MDX is now the parent of our About footer, where the GitHub Contributors Card resides, and MDX styling controls the text colour of the Contributors Card as well. This aspect should be looked into but for now, I just changed the colours of the MDX Page Base so both About contents and About Footer look good and are readable.

Broken Production

Well, here we get to our most interesting part of the week.

After getting rid of Gatsby and switching our main front-end to Next.js, we found out that if we went to the /search or /about pages directly, without going to the index page first, we would get a 404 error and infinite redirects. At the same time, there was no such issue on the development server — it happened only in production.

I decided to take on this issue, and our team initially thought it was the Nginx config that needed to be rewritten to work with Next.js.

To be completely honest with you, I have no experience with Nginx whatsoever, and after reading through the docs and not having any clue where to start fixing this bug, I realized that I needed some help.

With this, I contacted The Father of Telescope — David Humphrey. Together we were able to resolve this bug.

What was it about?

So to start with, Nginx was not guilty. Well, it was, but it was just doing its job.

We started by recreating this bug on my machine. To do this, we had to build the Next.js front-end with npm run build and serve it via npx http-server out (out is the directory that contains our build). After going to localhost:8080/search and seeing that we got the same behaviour as in production, we were able to look deeper into it.

As it turned out, Nginx itself was not the problem. When we made a request to /search or /about, Nginx was not able to find those pages because, in the static (built) version of the front-end that Nginx had, we have /search.js and /about.js, not /search/index.js and /about/index.js.
So Nginx was following this instruction: try_files $uri $uri/ @proxy; and, after not being able to find the needed pages, sent the request to the back-end. The back-end replied with an error handler, and because the Next.js app was not running (nobody had started it yet), the /error page request went back to the back-end, and again, and again, and again.

To fix that, we added a single line of code that changed everything (This is probably my smallest fix that had so much significance).

Next.js build fix

In our next.config.js file we added this line: trailingSlash: true. What it does is build our front-end in such a way that every page has its own folder, so instead of /search.js we now have /search/index.js.

This way, when receiving a request for one of those pages, the Back-End will be able to call to one of the index.js files and actually render the Next.js Front-End, so it can handle any further requests to it and not cause infinite redirects.

That’s all for today, folks!

As I said before, I am working on the 3-part long read that will tell you about how I contributed to the Next.js Front-end for Telescope — so stay tuned!

Telescope Page: https://telescope.cdot.systems/
Telescope GitHub — We welcome all contributors! — https://github.com/Seneca-CDOT/telescope
My GitHub: https://github.com/izhuravlev
Chris: https://github.com/chrispinkney
David: https://github.com/humphd

Pst-pst

We at Telescope wanted to do a little rebranding and are working on merch — so I will definitely share some news about it soon!

by Ilya Zhuravlev at Tue Mar 02 2021 11:11:59 GMT+0000 (Coordinated Universal Time)

Saturday, February 27, 2021


Anton Biriukov

Weekly Telescope Podcast

With the imminent madness of midterms, this week was fairly quiet for most of the students on Telescope. Nevertheless, important work has been going on to expand our footprint in the world of micro-services. We rolled out an impetuous switch to the UI 2.0 and cleaned up after porting from Gatsby.

Rouge National Urban Park. Photo by Anton Biriukov

As for myself, I primarily focused on investigating the changelog generation issue. Since our old changelog generator did not produce any results during the 1.7.0 release, and was well-hated by David for making us follow specific commit message formatting, its fate was clear. Even though I couldn’t fight my curiosity about why it was failing, I could not get any clear answers. I tried to reproduce the bug on my forked repository and got the same results. We had not changed any configuration or updated the action since our 1.6.0 release, so my best guess is that it probably just didn’t feel comfortable working under the pressure of everyone opposed to being constrained by commit message restrictions… Either way, I followed Chris’s commandments from our last release and scrapped it for good, especially when there was a new, more popular and better maintained alternative with way richer configuration options. The new action creates the changelog based on pull requests and assigns them, based on labels, to the categories you specify in the configuration file. Here is an extract from the JSON configuration file I defined (kudos to Josue):

{
  "categories": [
    {
      "title": "## 🚀 Autodeployment",
      "labels": ["area: autodeployment"]
    },
    {
      "title": "## 🛠 Back-end",
      "labels": ["area: back-end"]
    },
...

As you can see, it is meant to be fairly simple, and you can put markdown in it as well. I am surprised no one has judged my emoji choices yet! I tested the new action on my forked repository and it seemed to be working fine. Let's see how it behaves for 1.8.0.

Apart from that, I also took time to look into micro-services, and David’s introduction to containerization with Docker was very helpful for that. The Seneca SSO authentication flow that we have to use is quite a monster. Even though I wasn’t able to dig into every line of what Dave has written, his overview helped a lot in figuring out the flow. Reviewing the code was somewhat painful, but reading clean code is a very nice exercise that everyone should attempt.

Lastly, I have also been monitoring our Dependabot behaviour this week. One downside of our current configuration is that the bot will only create one PR per package file per week automatically. And it looks like we will most likely have more than one dependency update a week, which can bring us back to a cluttered list of outdated dependencies. I think we should keep an eye on it…

by Anton Biriukov at Sat Feb 27 2021 23:16:16 GMT+0000 (Coordinated Universal Time)


Abdulbasid Guled

DPS911 Blog #7: The possible existence of unseen microbial life...AKA undiscovered Posts!

Make a donation, save a post's life every day. They deserve love, even if nobody will ever read them.

#PostLivesMatter

In today's blog post, we're in the middle of midterm week. That means I cannot focus on this class alone, as I am getting destroyed by Linux and cross-platform at the moment. With that in mind, I mentioned last week that I would be getting my post microservice up for review, and I did just that! You can find it here

Does it work? Of course not. It uses files that other services are currently using, which should be in for review shortly. I can easily refactor those in and push another PR soon. Also, I'm very inexperienced with Docker, and the showcase of it in today's review meeting showed that I still have a lot to learn.

So, I'll be revising this throughout the weeks ahead and land this in the 1.8 release. I think as long as I can get the dependency services in, this one should be good to go.

In other news, I landed a fix that allows Telescope to process iframe tags that include Spotify playlists. You can find that PR here. This PR also updates the test cases so that they work as well. This was a rather quick but annoying fix, since I had to actually read HTML tags, which is something I need to get better at. Also, reading the backend code was a nice change of pace.

Today marks the beginning of Reading Week, though! A beautiful time when we get to focus on stuff other than school for a while.

I won't be slacking off though. Just, taking my sweet, sweet time.

Next time, Telescope 1.8 releases! My big post microservice should hopefully land soon. You all are going to love it! Until then, stay tuned!

by Abdulbasid Guled at Sat Feb 27 2021 02:14:15 GMT+0000 (Coordinated Universal Time)


Chris Pinkney

Insert catchy title here!

I let this week get away from me due to all the chaos that comes from midterms. Ah well, so much for inbox 0.

Without further ado (i.e. excuses,) let's begin with what went down this week on the admin side of Telescope.

On Wednesday it was discovered that our new Telescope builds were failing to deploy to our staging server despite our CI being all green across the board:

Copying API_URL=https://dev.telescope.cdot.systems to NEXT_PUBLIC_API_URL
Using NEXT_PUBLIC_API_URL=https://dev.telescope.cdot.systems
info  - Creating an optimized production build...

Failed to compile.

./src/components/Posts/Post.tsx:101:54
Type error: Argument of type '{ month: string; day: string; year: string; }' is not assignable to parameter of type 'DateTimeFormatOptions'.
  Types of property 'year' are incompatible.
    Type 'string' is not assignable to type '"numeric" | "2-digit" | undefined'.
   99 |   const date = new Date(dateString);
  100 |   const options = { month: 'long', day: 'numeric', year: 'numeric' };
> 101 |   const formatted = new Intl.DateTimeFormat('en-CA', options).format(date);
      |                                                      ^
  102 |   return `Last Updated ${formatted}`;
  103 | };
  104 | 

The failing file was Post.tsx, which, strangely enough, had not been touched for about 10 days. The builds were fine otherwise; it wasn't until we tried to build Telescope using Docker that we discovered the failure.

The fix was even stranger (old vs. fixed):

  const date = new Date(dateString);
  const options = { month: 'long', day: 'numeric', year: 'numeric' };
  const formatted = new Intl.DateTimeFormat('en-CA', options).format(date);

vs.

  const date: Date = new Date(dateString);
  const formatted = new Intl.DateTimeFormat('en-CA', {
    month: 'long',
    day: 'numeric',
    year: 'numeric',
  }).format(date);

I laughed about it with Josue before approving the fix 🤷 The lesson: have your CI deploy using the same tech as your prod server. Our CI does not build, run, and test using Docker, and that's where the disconnect came from: one environment built with Docker, and one did not.
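For what it's worth, the root cause is TypeScript widening the standalone object literal's properties to `string`, which doesn't satisfy the narrower union types in `Intl.DateTimeFormatOptions`. A sketch of two alternative fixes that keep a named options variable (the `timeZone` is pinned here only to make the example deterministic; the real code doesn't set it):

```typescript
// 1. Annotate the variable explicitly so the literals are checked against
//    the right union types:
const options: Intl.DateTimeFormatOptions = {
  month: 'long',
  day: 'numeric',
  year: 'numeric',
  timeZone: 'UTC', // pinned only so this example's output is deterministic
};

// 2. Or use a const assertion so the literals keep their narrow types
//    ('long' stays 'long' instead of widening to string):
const optionsConst = { month: 'long', day: 'numeric', year: 'numeric', timeZone: 'UTC' } as const;

const date = new Date(Date.UTC(2021, 1, 27));
console.log(new Intl.DateTimeFormat('en-CA', options).format(date));      // e.g. "February 27, 2021"
console.log(new Intl.DateTimeFormat('en-CA', optionsConst).format(date)); // same output
```

Either form avoids the error without inlining the object, which is why the inline version in the actual fix also compiles: an inline literal is contextually typed against the parameter.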

Thanks Doc Josue!

I also did a minor review of Tony's microservice. I didn't end up testing it locally; it's on my list of things to do tomorrow after work. It's great to see another microservice popping up that I can look at and borrow (steal) ideas from. I'm currently looking forward to more of this, plus Abdul's and Professor Dave's stuff.

User Microservice Update

Several updates happened this week to my microservice:

  • I finally got Firebase to work offline.
    • It took a lot of trial and error, and mostly blog posts. I'm still not that great at reading technical documentation.
  • This means I can implement testing!
    • Since the db can now be manipulated offline, I can add and delete data to ensure that when we use the actual production version of the db, all our code that we've developed will work as intended.
    • This also means that I can now start writing unit tests to ensure a solid relationship between Express and Firestore.
    • The whole experience of this two-pronged attack (offline mode + testing) has been really enlightening; it really showed how quickly a prototype can be created, demoed, and implemented using some basic tooling. It doesn't matter if your db is online or offline: if you can test it, then the code you created should work on a production version too (sans outside interference from other tools like nginx, firewalls, etc.)
    • I added several basic unit-tests using Supertest (after failing an initial attempt using Got)
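The offline-testing pattern above can be sketched, very roughly, with an in-memory stand-in for the db. Every name here is made up for illustration; the real service uses Express, Supertest, and the Firestore emulator rather than a plain Map:

```typescript
// Rough sketch of "test against an offline db": a Map stands in for the
// Firestore emulator so tests can add and delete data freely without ever
// touching production. All names are hypothetical.
type User = { id: string; name: string };

const db = new Map<string, User>();

function createUser(user: User): User {
  db.set(user.id, user);
  return user;
}

function getUser(id: string): User | undefined {
  return db.get(id);
}

function deleteUser(id: string): boolean {
  return db.delete(id);
}

// A unit test exercises the full create/read/delete cycle:
createUser({ id: '1', name: 'Chris' });
console.log(getUser('1')?.name); // "Chris"
deleteUser('1');
console.log(getUser('1'));       // undefined
```

With Supertest the same idea applies one level up: the test drives Express routes, and those routes talk to the emulator instead of the real Firestore.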

I also got some really great feedback and direction from a little demo I presented to Professor X. Let's take a look (and laugh) at this week's goals from last week's blog post:

  1. Get testing working with the emulator and Jest (Done!)
  2. Implement Satellite
  3. Create some basic README.md files
  4. Export my private key to .env (was having trouble with this last week; turns out it's not possible in this form)
  5. Figure out how to utilize my User class/schema (will likely be deleted)
  6. Create an update route
  7. Create a delete route
  8. Add date/time users were created (mostly for funsies)
  9. Add date/time users were updated (again, for funsies)

Here is the updated todo list that I want to get done between now and our next big release (1.8 - March 12th 2021):

  1. Finish up making some more tests, specifically ones that test Celebrate's validation rules
  2. Migrate to https://www.npmjs.com/package/@firebase/rules-unit-testing because @firebase/testing is deprecated
  3. Create a basic README.md file
  4. Implement a list of current deficiencies and issues to discuss when the draft PR goes up
  5. Implement Satellite
  6. Dockerize the microservice
  7. Add Firestore private key placeholders to env.production and env.staging
  8. Tidy up the package.json run commands using npm-run-all
  9. Create an update (put) route
  10. Create a delete route
  11. Add date/time users were created (mostly for funsies)
  12. Add date/time users were updated (again, for funsies)
  13. Migrate unnecessary package.json dependencies to dev-dependencies
  14. Finally, create a draft PR!

I think that about does it for me this week. Hoping to have more done to show off in my next blog post.

Other news:

  • I got an Aeropress on sale this week and some ground coffee at a local coffee shop; I'll never go back to my Keurig.
  • I think midterm week went okay for me. 🤞
  • I'm participating in Seneca's hackathon next week! It's my first and I'm hoping it'll be a lot of fun.

by Chris Pinkney at Sat Feb 27 2021 01:52:30 GMT+0000 (Coordinated Universal Time)


Royce Ayroso-Ong

Working on the 1.8 Release

Status report: full steam ahead

Photo by Richard Horne on Unsplash

Hey everyone! To recap what I was planning to do for the Telescope 1.8 release in my last blog, two specific things I wanted to take on were working on making the DynamicImage component fully dynamic and updating the ‘help’ UI for the search page.

For the first issue, above is what I started with — and as you can see, the DynamicImage is just an image within a picture tag with some CSS on top. What I really need to implement here is the randomness of the background photo, allowing it to change and transition to other photos. For reference, here is what the Gatsby version of the DynamicImage component looks like (take note of the random generator):

One thing that has been troubling me was the Gatsby syntax and structure for rendering React pages, though with some more trial and error I think I can replicate the original Telescope look. If I successfully pull this off in time I can even put in an issue to re-add the DynamicImage component to the error page — something I removed because it wouldn't scale properly on mobile.
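The random-selection part of that idea can be sketched roughly like this. The file names are placeholders, and the real component also has to handle transitions and the picture/source markup:

```typescript
// Hypothetical sketch of picking a random background image; the paths are
// placeholders, not the actual Telescope assets.
const backgrounds: string[] = [
  '/backgrounds/hero-1.jpg',
  '/backgrounds/hero-2.jpg',
  '/backgrounds/hero-3.jpg',
];

// Pick one image at random on each render or interval tick.
function randomBackground(images: string[]): string {
  const index = Math.floor(Math.random() * images.length);
  return images[index];
}

console.log(randomBackground(backgrounds));
```

In a React component, the pick would live in state and a timer (or a key change) would trigger the transition to the next image.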

As for the ‘help’ UI issue, I don’t think that will be too big of a problem, considering that it was quite well designed and planned out; all I need to do is replicate it in CSS. Moreover, there are more issues to address over the coming weeks involving the new search page UI and some other UI bugs.

As a final side note, our Telescope team just had a crash course in Docker and wow! What a powerful tool! I definitely want to get involved with it once this UI stuff has been handled.

by Royce Ayroso-Ong at Sat Feb 27 2021 01:52:08 GMT+0000 (Coordinated Universal Time)


Mohammed Ahmed

Remove Spaces & Clear Stuff in Paragraphs

Working on removing spaces was quite an interesting task. I’m still very new at JavaScript, so I thought that this would be a cool task to do!

So, I had to make a module and a test module for removing spaces. Easy, right? Kinda. It was really easy to make the module that removes the spaces, but writing the test was a new task for me.

The test cases for Empty Paragraphs

This was me writing the tests. Dave made a really good point about using a Node package called “clean-whitespace”; it removes quite literally anything that looks like a space. So, I implemented this, and then removed all the spaces right after as a means to sanitize. It worked on my machine, so I made a PR for it.

The code to clean the whitespace

So, now that I had the code and the test working, everything passed through our CI/CD checks and it got merged! Yay!
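As a rough approximation of that sanitizing step (using a plain regex instead of the clean-whitespace package, so this is not the actual module):

```typescript
// Approximation only: strip everything that looks like a space. In
// JavaScript regexes, \s already covers tabs, newlines, and non-breaking
// spaces (\u00a0), so only the zero-width space (\u200b) needs adding.
function removeSpaces(text: string): string {
  return text.replace(/[\s\u200b]/g, '');
}

console.log(removeSpaces('  hello \u00a0 world ')); // "helloworld"
```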

by Mohammed Ahmed at Sat Feb 27 2021 01:15:16 GMT+0000 (Coordinated Universal Time)

Friday, February 26, 2021


Mohammed Ahmed

OpenTTC — The Pains of Predicting the Train

Last post, I talked about the troubles of tracking first and last train arrival times, due to the lack of Subway data that the TTC neglects to publish. This post expands on another issue that came up while developing the Subway API: train predictions for a chosen Subway station.

Let’s look at Eglinton West station again; the first trains arrive at 6:00am and 6:22am respectively. Now, since we know when the trains come by, we can just go by 2-minute intervals, right? Well, yes and no. The answer is yes at times, when we know that the TTC is running without any service disruptions. But this is Toronto. Running subway service without any service disruptions is like winning the lottery twice in a row. It’s near impossible.

So, what can I do? Well, there’s actually a simple solution. Instead of trying to predict something you barely have data on, it’s better to have a property that stores any service disruption for that specific station, rather than predicting when the next train will arrive during the disruption. Why? Because we really don’t know when the train will arrive. Yes, there’s a disruption, but without any concrete data, we can’t assume.
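The “store the disruption instead of predicting” idea can be sketched roughly like this; every field name here is my own invention for illustration, not the actual OpenTTC schema:

```typescript
// Hypothetical station record: during a disruption we surface the alert
// instead of guessing arrival times we have no data for.
interface SubwayStation {
  name: string;
  firstTrains: string[];            // scheduled first arrivals, e.g. ['6:00', '6:22']
  headwayMinutes: number;           // normal gap between trains
  serviceDisruption: string | null; // null when service is running normally
}

function nextTrainMessage(station: SubwayStation, minutesSinceFirst: number): string {
  if (station.serviceDisruption) {
    // No prediction during a disruption; there's no data to back a guess.
    return `Service alert at ${station.name}: ${station.serviceDisruption}`;
  }
  const wait = station.headwayMinutes - (minutesSinceFirst % station.headwayMinutes);
  return `Next train at ${station.name} in ~${wait} min`;
}
```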

Implementation shouldn’t be too hard, plus I get to learn something new!

by Mohammed Ahmed at Fri Feb 26 2021 18:17:07 GMT+0000 (Coordinated Universal Time)