Planet Sugar

Planet Sugar is a collection of personal blogs by Sugar Labs contributors. Sugar Labs is a world-wide organization of passionate people working together to solve the same problem: giving everyone an opportunity to learn to learn. Our community members write about what excites them about learning, Sugar, and the Sugar community. In the spirit of free software, we share and criticize—that is how we learn and improve and encourage participation by newcomers. Enjoy and join the conversation.

June 16, 2019

Karma Project

Concept And Thoughts Of Institutionalized State (PANCASILA)

Educational Philosophy and Theory is a prestigious international journal, publishing articles concerned with all aspects of educational philosophy. Although Rousseau never intended these educational details to be taken literally as a blueprint (he saw himself as developing and illustrating the basic principles), over the ages there have been attempts to implement them, one being the famous British "free school", A.S. Neill's Summerhill (cf.

The very nature of philosophy, on the other hand, is "essentially contested"; what counts as a sound philosophical work within one school of thought, or socio-cultural or academic setting, may not be so regarded (and may even be the focus of derision) in a different one.

No one person can have mastered the work produced by such a range of figures, representing as they do a number of quite different frameworks or approaches; and relatedly, no one individual stands as emblematic of the entire field of philosophy …

by jasmine at June 16, 2019 12:54 AM

June 13, 2019

Karma Project

Continuing Education Through the University of Wisconsin

The APA offers continuing education programs for psychologists and other mental health professionals. Within the field of continuing education, professional continuing education is a specific learning activity generally characterized by the issuance of a certificate or continuing education units (CEUs) for the purpose of documenting attendance at a designated seminar or course of instruction.

Whether you're seeking to advance your career or jumpstart your business, pursue learning as an end in itself or as a vehicle to help you make a difference in the lives of others, Penn has a program that can help you gain the knowledge you need to get there.

Mission: San Diego Continuing Education commits to student success and community enrichment by offering accessible, equitable, and innovative quality education and support services to diverse adult learners in pursuit of lifelong learning, training, career advancement, and pathways to college.

CDC PREPARE: …

by jasmine at June 13, 2019 09:03 PM

May 13, 2019

One Laptop per Child

Saint-Ouen deployment of Sugarizer OS

OLPC France has deployed Samsung Galaxy Tab A 2018 tablets with Sugarizer OS at the Mendela school. For more details, see https://wiki.sugarlabs.org/go/Sugarizer_Saint-Ouen_deployment

by James Cameron at May 13, 2019 06:36 AM

May 03, 2019

One Laptop per Child

Turtle Blocks all the way Down Under

Learners at Tooraweenah Public School in outback Australia were taught how to use the OLPC XO-4 with Turtle Blocks. Their task was to use the turtle to draw shapes such as squares and hexagons.

Part of a weekly visit by a volunteer technology teacher and OLPC’s CTO.

by James Cameron at May 03, 2019 06:32 AM

April 17, 2019

OLE Nepal

Echoes of Darchula

– Subash Parajuli, Teaching Resident for Darchula 2018-19: "When I decided to go to Darchula for the 'Teaching with Technology Residency Program', my excitement was mostly about visiting a new place in the remote far-western corner of Nepal. The only thing I knew about Darchula was that it was a district that shared its borders with both India and China. There was not much expectation, beyond getting to explore a new part of our country.…

by admin at April 17, 2019 12:08 PM

March 05, 2019

Tomeu Vizoso

Panfrost update: a new kernel driver

The video

Below you can see the same scene that I recorded in January, which was rendered by Panfrost in Mesa but using Arm's kernel driver. This time, Panfrost is using a new kernel driver that is in a form close to being acceptable for inclusion in the mainline kernel:

The history behind it

During the past two months Rob Herring and I have been working on a new driver for Midgard and Bifrost GPUs that could be accepted into mainline.

Arm already maintains a driver out of tree with an acceptable open source license, but it doesn't implement the DRM ABI and several design considerations make it unsuitable for inclusion in mainline Linux.

The absence of a driver in mainline prevents users from keeping their kernels up-to-date and hurts integration with other parts of the free software stack. It also discourages SoC and BSP vendors from submitting their code to mainline, and hurts their ability to track mainline closely.

Besides the code of the driver itself, there's one more condition for mainline inclusion: an open source implementation of the userspace library needs to exist, so other kernel contributors can help verify, debug and maintain the kernel driver. It's an enormous pile of difficult work to reverse engineer the inner workings of a GPU and then implement a compiler and command submission infrastructure, so big thanks to Alyssa Rosenzweig for leading that effort.

Upstream status

Most of the Panfrost code is already part of mainline Mesa, with the code that directly interacts with the new DRM driver being in the review stage. Currently targeted GPUs are the T760 and T860, with the RK3399 being the SoC most often used for testing.

The kernel driver is being developed in the open, and though we are trying to follow the best practices displayed by other DRM drivers, there are a number of tasks that need to be done before we consider it ready for submission.

The work ahead

In the kernel:
- Make MMU code more complete for correctness and better performance
- Handle errors and hangs and correctly reset the GPU
- Improve fence handling
- Test with compute shaders (to check completeness of the ABI)
- Lots of cleanups and bug fixing!

In Mesa:
- Get GNOME Shell working
- Get Chromium working with accelerated WebGL
- Get all of glmark2 working
- Get a decent subset of dEQP passing and use it in CI
- Keep refactoring the code
- Support more hardware

Get the code

The exact bits used for the demo recorded above are in various stages of getting upstreamed, but here they are in branches for easier reproduction:


by Tomeu Vizoso at March 05, 2019 06:33 AM

January 07, 2019

Tomeu Vizoso

A Panfrost milestone

The video

Below you can see glmark2 running as a Wayland client in Weston, on a NanoPC-T4 (an RK3399 SoC with a Mali-T860 GPU). It's much smoother than in the video, which is limited to 5 FPS by the webcam.


Weston is running with the DRM backend and the GL renderer.

The history behind it


For more than 10 years, at Collabora we have been happily helping our customers to make the most of their hardware by running free software.

One area some of us have specially enjoyed working on has been open drivers for GPUs, which for a long time have been considered the next frontier in the quest to have a full software platform that companies and individuals can understand, improve and fix without having to ask for permission first.

Something that has saddened me a bit has been our reduced ability to help those customers that for one reason or another had chosen a hardware platform with ARM Mali GPUs, as no open driver was available for those.

While our biggest customers were able to get a high level of support from the vendors in order to have the Mali graphics stack well integrated with the rest of their product, the smaller ones had a much harder time in achieving that level of integration, which manifested in reduced performance, increased power consumption and slipped milestones.

That's why we have been following with great interest the several efforts that aimed to come up with an open driver for GPUs in the Mali family, one similar to those already existing for Qualcomm, NVIDIA and Vivante.

At XDC last year we had the chance to meet the people involved in the latest effort to develop such a driver: Panfrost. And in the months that followed I made some room in my backlog to come up with a plan to give the effort a boost.

At that point, Panfrost was only able to get its bits on the screen via an elaborate hack that involved copying each frame into an X11 SHM buffer, which besides making the setup of the development environment much more cumbersome, invalidated any performance analysis. It also limited testing to demos such as glmark2.

Due to my previous work on Etnaviv I was already familiar with the abstractions in Mesa for setups in which the display of buffers is performed by a device different from the GPU so it was just a matter of seeing how we could get the kernel driver for the Mali GPU to play well with the rest of the stack.

So during the past month or so I have come up with a proper implementation of the winsys abstraction that makes use of ARM's kernel driver. The result is that now developers have a better base on which to work on the rendering side of things.

By properly creating, exporting and importing buffers, we can now run applications on GBM, from demos such as kmscube and glmark2 to compositors such as Weston, but also big applications such as Kodi. We are also supporting zero-copy display of GPU-rendered clients in Weston.

This should make it much easier to work on the rendering side of things, and work on a proper DRM driver in the mainline kernel can proceed in parallel.

For those interested in joining the effort, Alyssa has graciously taken the time to update the instructions to build and test Panfrost. You can join us at #panfrost on Freenode and can start sending merge requests to Gitlab.

Thanks to Collabora for sponsoring this work and to Alyssa Rosenzweig and Lyude Paul for their previous work and for answering my questions.

by Tomeu Vizoso at January 07, 2019 12:33 PM

November 21, 2018

OLE Nepal

Meet our Darchula Teaching Residents for 2018-19

OLE Nepal's Teaching with Technology Residency program has been receiving rave reviews from school teachers, local communities and children. Each year, OLE Nepal trains young graduates to support schools newly launched into the laptop program in the far western districts of Nepal. Since it was introduced five years ago, many young graduates have travelled to remote communities in these districts, where they live for months working with schools and communities so that they can use…

by admin at November 21, 2018 05:09 AM

September 07, 2018

sam.today

Derivations 102 - Learning Nix pt 4

This guide will build on the previous three guides, and look at creating a wider variety of useful nix packages.

Nix is built around the concept of derivations. A derivation is simply defined as "a build action". It produces one (or sometimes more) output paths in the nix store.

Basically, a derivation is a pure function that takes some inputs (dependencies, source code, etc.) and makes some output (binaries, assets, etc.). These outputs are referenceable by their unique nix-store path.

Derivation Examples

It's important to note that literally everything in NixOS is built around derivations:

  • Applications? Of course they are derivations.
  • Configuration files? In NixOS, they are a derivation that takes the nix configuration and outputs an appropriate config file for the application.
  • The system configuration as a whole (/run/current-system)?
sam@vcs ~> ls -lsah /run/current-system
0 lrwxrwxrwx 1 root root 83 Jan 25 13:22 /run/current-system -> /nix/store/wb9fj59cgnjmkndkkngbwxwzj3msqk9c-nixos-system-vcs-17.09.2683.360089b3521

It's a symbolic link to a derivation!

It's derivations all the way down.

If you've followed this series from the beginning, you will have noticed that we've already made some derivations. Our nix-shell scripts are based on a derivation. When packaging a shell script, we also made a derivation.

I think it is easiest to learn how to make a derivation through examples. Most packaging tasks are vaguely similar to packaging tasks done in the past by other people. So this guide will go through examples of using mkDerivation.

mkDerivation

Making a derivation manually requires fussing with things like processor architecture and having zero standard build inputs. This is often not necessary. So instead, NixPkgs provides the function stdenv.mkDerivation, which handles the common patterns.

The only real requirement to use mkDerivation is that you have some folder of source material. This can be a reference to a local folder, or something fetched from the internet by another nix function. If you have no source, or just one file, consider the "trivial builders" covered in part three of this series.

mkDerivation does a lot of work automatically. It divides the build up into "phases", all of which include a little bit of default behaviour - although it is usually unintrusive or can be overridden. The most important phases are:

  1. unpack: unzips, untars, or copies your source folder to the nix store
  2. patch: applies any patches provided in the patches variable
  3. configure: runs ./configure if it exists
  4. build: runs make if it exists
  5. check: skipped by default
  6. install: runs make install
  7. fixup: automagically fixes up things that don't gel with the nix store, such as using incorrect interpreter paths
  8. installCheck: runs make installcheck if it exists and is enabled

You can see all the phases in the docs. But with a bit of practice from the examples below you'll likely get the feel for how this works quickly.
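
For instance, here is a sketch (with an illustrative package name) of the two ways to customize a phase: replace it wholesale by defining the phase, or add to it with a pre/post hook:

with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "phase-demo";
  # Any folder of source material works; here, the current directory
  src = ./.;

  # Replace the default build phase (`make`) entirely
  buildPhase = ''
    echo "custom build steps go here"
  '';

  # Replace the default install phase (`make install`), keeping the
  # pre/post hooks callable via runHook
  installPhase = ''
    runHook preInstall
    mkdir -p $out
    cp -r . $out/
    runHook postInstall
  '';

  # Runs at the end of the install phase, thanks to runHook above
  postInstall = ''
    echo "installed into $out"
  '';
}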

Example #1: A static site

Nix makes writing packages really easy, and with NixOps (which we'll learn about later) Nix derivations are automagically built and deployed.

First we need to answer the question of how we would build the static site ourselves. This is a Jekyll site, so you'd run the jekyll command:

with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "example-website-content";

  # fetchFromGitHub is a build support function that fetches a GitHub
  # repository and extracts into a directory; so we can use it
  # fetchFromGithub is actually a derivation itself :)
  src = fetchFromGitHub {
    owner = "jekyll";
    repo = "example";
    rev = "5eb1b902ca3bda6f4b50d4cfcdc7bc0097bac4b7";
    sha256 = "1jw35hmgx2gsaj2ad5f9d9ks4yh601wsxwnb17pmb9j02hl3vgdm";
  };
  # the src can also be a local folder, like:
  # src = /home/sam/my-site;

  # This overrides the shell code that is run during the installPhase.
  # By default; this runs `make install`.
  # The install phase will fail if there is no makefile; so it is the
  # best choice to replace with our custom code.
  installPhase = ''
    # Build the site to the $out directory
    export JEKYLL_ENV=production
    ${pkgs.jekyll}/bin/jekyll build --destination $out
  '';
}

Now we can see that this derivation builds the site. If you save it to test.nix, you can trigger a build by running:

> nix-build test.nix
/nix/store/b8wxbwrvxk8dfpyk8mqg8iqhp7j2c9bs-example-website-content

The path printed by nix-build is where $out was in the Nix store. Your path might be a little different; if you are running a different version of NixPkgs, then the build inputs are different.

We can see the site has built successfully by entering that directory:

> ls /nix/store/b8wxbwrvxk8dfpyk8mqg8iqhp7j2c9bs-example-website-content
2014  about  css  feed.xml  index.html  LICENSE  README.md

Using the content

We can then use that derivation as a webroot in a nginx virtualHost. If you have a server, you could add the following to your NixOS configuration:

let
  content = stdenv.mkDerivation {
    name = "example-website-content";

    ... # code from above snipped
  };
in
  services.nginx.virtualHosts."example.com" = {
    locations = {
      "/" = {
        root = "${content}";
      };
    };
  };

So how does this work? Ultimately, the "root" attribute needs to be set to the output directory of the content derivation.

Using the "${content}" expression, we force the derivation to be converted to a string (remembering ${...} is string interpolation syntax). When a derivation is converted to a string in Nix, it becomes the output path in the Nix store.

If you don't have a server handy, we can use the content in a simple HTTP server script:

# server.nix
with import <nixpkgs> {};

let
  content = stdenv.mkDerivation {
    name = "example-website-content";

    src = fetchFromGitHub {
      owner = "jekyll";
      repo = "example";
      rev = "5eb1b902ca3bda6f4b50d4cfcdc7bc0097bac4b7";
      sha256 = "1jw35hmgx2gsaj2ad5f9d9ks4yh601wsxwnb17pmb9j02hl3vgdm";
    };

    installPhase = ''
      export JEKYLL_ENV=production
      # The site expects to be served as http://hostname/example/...
      ${pkgs.jekyll}/bin/jekyll build --destination $out/example
    '';
  };
in
let
  serveSite = pkgs.writeShellScriptBin "serveSite" ''
    # -F = do not fork
    # -p = port
    # -r = content root
    echo "Running server: visit http://localhost:8000/example/index.html"
    # See how we reference the content derivation by `${content}`
    ${webfs}/bin/webfsd -F -p 8000 -r ${content}
  '';
in
stdenv.mkDerivation {
  name = "server-environment";
  # Kind of evil shellHook - you don't get a shell you just get my site
  shellHook = ''
    ${serveSite}/bin/serveSite
  '';
}

Then run nix-shell server.nix; this will start the server and you can view the site!

Example #2: A more complex shell app

We've already talked a lot about shell scripts. But sometimes whole apps get built in shell scripts. One such example is emojify, a CLI tool for replacing words with emojis.

We can make a derivation for that. All we need to do is copy the shell script into the PATH, and mark it as executable.

If we were writing the script ourselves, we'd need to pay special attention to fixing up dependencies (such as changing /bin/bash to a Nix store path). But mkDerivation has the fixup phase, which does this automatically. The defaults are smart, and in this case it works perfectly.

It is quite simple to write a derivation for a shell script.

with import <nixpkgs> {};

let
  emojify = let
    version = "2.0.0";
  in
    stdenv.mkDerivation {
      name = "emojify-${version}";

      # Using this build support function to fetch it from github
      src = fetchFromGitHub {
        owner = "mrowa44";
        repo = "emojify";
        # The git tag to fetch
        rev = "${version}";
        # Hashes must be specified so that the build is purely functional
        sha256 = "0zhbfxabgllpq3sy0pj5mm79l24vj1z10kyajc4n39yq8ibhq66j";
      };

      # We override the install phase, as the emojify project doesn't use make
      installPhase = ''
        # Make the output directory
        mkdir -p $out/bin

        # Copy the script there and make it executable
        cp emojify $out/bin/
        chmod +x $out/bin/emojify
      '';
    };
in
stdenv.mkDerivation {
  name = "emojify-environment";
  buildInputs = [ emojify ];
}

And see it in action:

> nix-shell test.nix

[nix-shell:~]$ emojify "Hello world :smile:"
Hello world 😄

Example #3: The infamous GNU Hello example

If you've ever read anything about Nix, you might have seen an example of making a derivation for GNU Hello. Something like this:

with import <nixpkgs> {};

let
  # Let's separate the version number so we can update it easily in the future
  version = "2.10";

  # Now define the derivation for the app
  helloApp = stdenv.mkDerivation {
    # String interpolation to include the version number in the name
    # Including a version in the name is idiomatic
    name = "hello-${version}";

    # fetchurl is a build support again; and does some funky stuff to support
    # selecting from a predefined set of mirrors
    src = fetchurl {
      url = "mirror://gnu/hello/hello-${version}.tar";
      sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
    };

    # Will run `make check`
    doCheck = true;
  };
in
# Make an environment for nix-shell
stdenv.mkDerivation {
  name = "hello-environment";
  buildInputs = [ helloApp ];
}

You can build and run this:

> nix-shell test.nix

[nix-shell:~]$ hello
Hello, world!

Ultimately this is a terrible and indirect example. This doesn't explicitly specify anything that the builder will actually run! It really confused me when I was learning Nix.

To understand it, we need to remember the default build phases from stdenv.mkDerivation. From above, we had a list of the most important phases. If we annotate the defaults with what happens in the case of GNU Hello, things start to make sense:

  1. unpack: by default, unzips, untars, or copies your source folder to the nix store. With GNU Hello: the source is a tarball, so it is automatically extracted.
  2. patch: by default, applies any patches provided in the patches variable. With GNU Hello: nothing happens.
  3. configure: by default, runs ./configure if it exists. With GNU Hello: runs ./configure.
  4. build: by default, runs make if it exists. With GNU Hello: runs make, and the app is built.
  5. check: skipped by default. With GNU Hello: we turn it on, so it runs make check.
  6. install: by default, runs make install. With GNU Hello: runs make install.

Since GNU Hello uses Make & ./configure, the defaults work perfectly for us in this case. That is why this GNU Hello example is so short!

Your Packaging Future

While it's amazing to use mkDerivation (so much easier than an RPM spec), there are many cases where you shouldn't use mkDerivation directly. NixPkgs contains many useful build support functions. These are functions that return derivations, but do a bit of the hard work and boilerplate for you. They make it easy to build packages that meet specific criteria.

We've seen a few build support functions today, such as fetchFromGitHub and fetchurl. These are just functions that return derivations; in these cases, derivations that download and extract the source files.

For example, there is pkgs.python36Packages.buildPythonPackage, which is a super easy way to build a python package.
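
As a hedged sketch of how it is used (the package name, version and hash below are placeholders, not a real recipe):

with import <nixpkgs> {};

python36Packages.buildPythonPackage rec {
  pname = "example";   # placeholder package name
  version = "1.0.0";   # placeholder version

  src = python36Packages.fetchPypi {
    inherit pname version;
    # lib.fakeSha256 is a stand-in; the first build failure
    # will report the real hash to use
    sha256 = lib.fakeSha256;
  };
}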

When making packages, there are helpful resources to check:

Up Next

In part 5, we'll learn about functions in the Nix programming language. With the knowledge of functions, we can go on and write our own build support function!

Follow the series on GitHub

Hero image from nix-artwork by Luca Bruno

September 07, 2018 12:38 PM

August 15, 2018

OLPC Learning Club

Cryptocurrencies for Payments

Encrypted email service Tutanota tests cryptocurrency for payments. Tutanota now accepts bitcoin and some altcoins. Tutanota, a provider of an encrypted email service, has begun accepting donations in bitcoin, ether, bitcoin cash and monero in order to test payment processing with cryptocurrencies, … Continue reading

by admin at August 15, 2018 09:11 AM

August 11, 2018

OLPC Learning Club

What Is the True Value of Cryptocurrencies?

This is the first in a series of posts that identify the sources of value in cryptocurrencies. The recent trend in coin offerings, including the massive rise of Tezos, has pushed cryptocurrencies into the spotlight. The valuation of the world's leading … Continue reading

by admin at August 11, 2018 09:06 AM

July 19, 2018

Mihaela Sabin

Four years later…

In a courageous attempt to restart blogging, I'm cautiously writing my first post in four years. Minimally. It's mid-July, when academics contemplate the growing pile of projects that only summer could bring hope of completing. Personal deadlines compete with hard deadlines for conference and journal submissions. Fabulous ideas for new teaching strategies planned for […]

by Mihaela Sabin at July 19, 2018 03:46 PM

July 12, 2018

OLPC San Francisco blogs

Ethiopia project: An update

Andreas Gros, who had presented about his upcoming Ethiopia project, has now made a trip to Addis Ababa and back. This project has several OLPC NL3 laptops and multiple School Servers. He will be sharing his updates and experiences with us about this project. Please join us!

RSVP on Eventbrite

Update: Andi's slides are posted here. The recording is also up on YouTube here.

by sverma at July 12, 2018 05:00 AM

April 18, 2018

OLPC San Francisco blogs

Community Summit = Open Hack 2018

Registration is open. Register here.

 

This year's event is a little different. We are joining forces with some of the other projects in the Commons space at San Francisco State University. This year's event is called Open Hack 2018.

 
 
The event is largely scheduled to run on Saturday (April 28) and Sunday (April 29). We will have a meet-and-greet on Friday evening (April 27), but the main event will begin on Saturday (April 28).
 
The format is as follows:
 
  • What we typically refer to as "projects" are called "Challenges" in this format. Anybody can submit a Challenge. When submitting a Challenge, you have to provide us with information about the Challenge itself, existing resources, people involved, and the kinds of skills that you may find helpful in completing the challenge. The goal is to complete some part of the Challenge by Sunday afternoon.
  • The Challenges will be printed and posted up on the wall starting Friday (April 27). On Saturday morning (April 28), people who come in will assign themselves to different challenges. It's quite common for some of the challenges to not have any interested people. That's okay.
  • As we start to see a cluster of people collecting on a given Challenge, we will allocate a room for them and then that room becomes their space for the next day and a half. Unlike in the past, where we had timed sessions (typically 75 minutes), these groups get to work on their problem for the entire day Saturday and half day Sunday.
  • On Sunday afternoon, they present their progress and future direction. The work (code, content, etc.) will have to be made available somewhere (a repo such as GitHub) under a FOSS, Creative Commons, or Open Data license.
  • After the presentations, a panel of judges will determine some form of ranking. There may also be some token prizes.
This is somewhat different from what we've done in the past, but given the level of maturity in our projects, and the amount of focus that is needed to work on fixing bugs and building upon what we already have, the hackathon approach seems to be more apt than simple presentations. If you have somebody in mind who cannot be there physically, you can always bring them in online. The rooms are fairly well equipped, with whiteboards, projectors and Internet access. We are also in the process of arranging for other operational logistics.
 
In the meantime, take a look at the information here and the code of conduct here, and submit a challenge.
 

Also, let us know if you plan to attend, so we can look out for other arrangements as well, as necessary.
 
Sameer Verma: sverma@sfsu.edu
Aaron Borden: adborden@live.com

 

by sverma at April 18, 2018 01:59 AM

February 09, 2018

Mel Chua

Getting the radical realtime transparency ball rolling

Getting radical realtime transparency in a project can be slow and frustrating, especially in the beginning. Most folks don't know this, but in order to have public conversations, leaders need to send out a ridiculous number of private messages to get things rolling. In fact, looking at my own inbox history for the past half-decade, I've sent anywhere between 2 and 20 private messages – on average (not maximum, average) – to get a single public message during the early stages of a project's "open" life.

You really need to keep poking people in private asking them to put their messages public. It’s thankless and invisible work. It takes a while to build a new cultural habit, and for a while it’s going to seem like you’ll be doing this forever… but trust me, it will come. It’s going to take longer than you want it to, it’s going to take an unexpected route, but keep the faith – it will come.

There are three strategies it’s useful to have up your sleeve for times like this.

Start the conversation in private, then say something like “hey, this is really good, could you resend it to the public list and I’ll reply there?” This is good for starters if folks are new to the “default to open” concept and are reacting with great nervousness. This nervousness stems from wariness that they may not want to go public with some hypothetical future thing – in effect, worrying about a problem that hasn’t happened yet. Going this route allows beginners in radical transparency to look at something they’ve already written and assess the risk for only that specific situation – no unknowns here, no future commitments. After a few times of going “oh, I guess that retroactive transparency was okay!” it’s much easier to ask people to give “open by default” a chance.

Publicly announce that you'll only respond to things sent to the public list. Reply to private emails with a reminder of this. This works only if the people you're trying to persuade are unable to route around you. It's also a bit of a strongarm tactic, not appropriate for all situations and best used in moderation if at all. But if you're a project manager, or an instructor, or a senior engineer, or something of the sort, you might be able to get away with it – and boy, folks learn fast this way.

Get others to help you with the nudges-to-public. Those 20 private emails to get a single public email? No reason why you’ve got to be the only one doing it. Train others to become Agents of Transparency as soon as you can, especially if they were once on the other side of the conversation. To begin with, ask them to work specific mailing lists, specific people, or specific conversation threads into the public eye – coach them from behind if needed. After a little while, they’ll be able to do it on their own – then just ask them to keep an eye out in general, and hey presto!

The key thing to keep in mind is that this is an investment. You’re putting resources into something that may not see returns for a little while. But the returns will come, and they’ll be worth it – when a project tips over into living, breathing, and practicing true realtime transparency, the results of the culture shift can be stunningly refreshing.

Keep going.

by Mel at February 09, 2018 11:42 PM

February 01, 2018

sam.today

Creating a super simple derivation - Learning Nix pt 3

This guide will build on the previous two guides, and look at creating your first useful derivation (or "package").

This will teach you how to package a shell script.

Packaging a shell script (with no dependencies)

We can use the function pkgs.writeShellScriptBin from NixPkgs, which handles generating a derivation for us.

This function takes two arguments: the name you want the script to have in your PATH, and a string containing the contents of the script.

So we could have:

pkgs.writeShellScriptBin "helloWorld" "echo Hello World"

That would create a shell script named "helloWorld" that prints "Hello World".

Let's put that in an environment; so we can use it in nix-shell. Write this to test.nix:

with import <nixpkgs> {};

let
  # Use the let-in clause to assign the derivation to a variable
  myScript = pkgs.writeShellScriptBin "helloWorld" "echo Hello World";
in
stdenv.mkDerivation rec {
  name = "test-environment";

  # Add the derivation to the PATH
  buildInputs = [ myScript ];
}

We can then enter the nix-shell and run it:

sam@vcs ~> nix-shell test.nix

[nix-shell:~]$ helloWorld
Hello World

Great! You've successfully made your first package. If you use NixOS, you can modify your system configuration and include it in your environment.systemPackages list. Or you can use it in a nix-shell (like we just did). Or whatever you want! Despite being one line of code, this is a real Nix derivation that we can use.
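
For example, on NixOS the system-wide version would look something like this sketch of /etc/nixos/configuration.nix (module boilerplate included):

{ pkgs, ... }:

{
  environment.systemPackages = [
    # The same one-line derivation, now installed system-wide
    (pkgs.writeShellScriptBin "helloWorld" "echo Hello World")
  ];
}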

Referencing other commands in your script

For this section, we are going to look at something more complex. Say you want to write a script to find your public IP address. We're basically going to run this command:

curl http://httpbin.org/get | jq --raw-output .origin

But running this requires dependencies; you need curl and jq installed. How do we specify dependencies in Nix?

Well, we could just add them to the build input for the shell:

# DO NOT USE THIS; this is a BAD example
with import <nixpkgs> {};

let
  # This is the WORST way to do dependencies
  # We just specify the derivation the same way as before
  simplePackage = pkgs.writeShellScriptBin "whatIsMyIp" ''
    curl http://httpbin.org/get | jq --raw-output .origin
  '';
in
stdenv.mkDerivation rec {
  name = "test-environment";

  # Then we add curl & jq to the list of buildInputs for the shell
  # So curl and jq will be added to the PATH inside the shell
  buildInputs = [ simplePackage pkgs.jq pkgs.curl ];
}

This would work OK; you could run nix-shell, then run whatIsMyIp and get your IP.

But it has a problem. The script would work unpredictably. If you took this package and used it outside of the nix-shell, it wouldn't work - because you wouldn't have the dependencies. It also pollutes the environment of the end user, as they need to have a compatible version of jq and curl in their PATH.

The more elegant way to do this is to reference the exact packages in the shell script:

with import <nixpkgs> {};

let
  # The ${...} is for string interpolation
  # The '' quotes are used for multi-line strings
  simplePackage = pkgs.writeShellScriptBin "whatIsMyIp" ''
    ${pkgs.curl}/bin/curl http://httpbin.org/get \
      | ${pkgs.jq}/bin/jq --raw-output .origin
  '';
in
stdenv.mkDerivation rec {
  name = "test-environment";

  buildInputs = [ simplePackage ];
}

Here we reference the dependency package inside the derivation. To understand what this is doing, we need to see what the script is written to disk as. You can do that by running:

sam@vcs ~> nix-shell test.nix

[nix-shell:~]$ cat $(which whatIsMyIp)

Which gives us:

#!/nix/store/hqi64wjn83nw4mnf9a5z9r4vmpl72j3r-bash-4.4-p12/bin/bash
/nix/store/pkc7g36m95jymw3ga2i7pwrykcfs78il-curl-7.57.0-bin/bin/curl http://httpbin.org/get \
  | /nix/store/znqn0z505i0bm1aiz2jaj1ki7z4ck1sv-jq-1.5/bin/jq --raw-output .origin

As we can see, all the binaries referenced in this script are absolute paths, something like /nix/store/...../bin/name. The /nix/store/... is the path of the derivation's (package's) build output.

Due to the pure and functional nature of Nix, that path will be the same on every machine that ever runs Nix. Replacing fuzzy references (e.g. jq) with definitive and unambiguous ones (/nix/store/...) is a core tenet of Nix, as it means packages come with all their dependencies and don't pollute your environment.

Since it is an absolute path, that script doesn't rely on the PATH environment variable; so the script can be used anywhere.

When you reference the path (like ${pkgs.curl} from above), Nix automatically knows to download the package onto the machine whenever your package is downloaded.

Why do we do it like this? Ultimately, the goal of package management is to make consuming software easier. Creating fewer dependencies on the environment that runs the package makes the script easier to use.
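
You can watch Nix tracking these dependencies by querying the runtime closure of the script's package. A sketch (the <hash> is a placeholder, and the output is truncated; the real closure also includes transitive dependencies such as glibc):

sam@vcs ~> nix-store --query --requisites /nix/store/<hash>-whatIsMyIp
/nix/store/hqi64wjn83nw4mnf9a5z9r4vmpl72j3r-bash-4.4-p12
/nix/store/pkc7g36m95jymw3ga2i7pwrykcfs78il-curl-7.57.0-bin
/nix/store/znqn0z505i0bm1aiz2jaj1ki7z4ck1sv-jq-1.5
/nix/store/<hash>-whatIsMyIp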

So the TL;DR is:

# BAD; not very explicit
# - we need to remember to add curl to the environment again later
badPackage = pkgs.writeShellScriptBin "something" ''
  curl ...
'';

# GOOD: Nix will do the magic for us
goodPackage = pkgs.writeShellScriptBin "something" ''
  ${pkgs.curl}/bin/curl ...
'';

Functions make creating packages easier

One of the main lessons from this process is that creating packages with functions (like pkgs.writeShellScriptBin) is pretty simple. Compare this to a traditional RPM or DEB workflow, where you would have needed to write a long spec file, put the script in a separate file, and fight your way through too much boilerplate.

Luckily; NixPkgs (the standard library of packages) includes a whole raft of functions that make packaging easier for specific needs. Most of these are in the build support folder of the NixPkgs repository. These are defined in the Nix expression language; the same language you are learning to write. For example, the pkgs.writeShellScriptBin function is defined as a ~10 line function.
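
In fact, writeShellScriptBin is roughly defined like this (a lightly simplified sketch; the real version also runs a shell syntax check on the result):

writeShellScriptBin = name: text:
  writeTextFile {
    inherit name;
    executable = true;
    # Place the script at $out/bin/<name>, so it lands on the PATH
    destination = "/bin/${name}";
    text = ''
      #!${stdenv.shell}
      ${text}
    '';
  };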

Some of the more complex build support functions are documented in the NixPkgs manual. There is currently documentation for packaging Python, Go, Haskell, Qt, Rust, Perl, Node and many other types of applications.

Some of the simpler build support functions (like pkgs.writeShellScriptBin) are not documented at the time of writing. Most of them are self-explanatory, and can be found by reading their names in the so-called trivial builders file.

Up Next

Derivations 102 - Learning Nix pt 4

Follow the series on GitHub

Hero image from nix-artwork by Eric Sagnes

February 01, 2018 11:50 AM

July 30, 2016

Edit Fonts Activity

Welcome Page UX Concept

This is just an idea I had last night for improving the welcome screen UX; if it's too much work, or Dave and Yash don't like it, I understand. However, I may try to code it myself for fun if Yash doesn't have time. :-)

My fear is that when users start the Edit Fonts activity for the first time they will be lost and not understand what to do. Some users might not even have a basic understanding of what vector drawing is or how a font is made. This welcome screen will at least give users a basic idea of how to use the activity. Most importantly, it makes the first screen visually interesting, interactive and fun. Many users may not continue with the activity if the first page is dull and boring.

I'm proposing that the welcome screen have 4 options, represented by icons and text, plus an editable .glyph file that reads "Edit Fonts" in the Geo typeface. The Edit Fonts logotype will be one .glyph file that is only loaded and never saved. See below:

UX concept 01

UX concept 02

I have added a Geo-Regular.ufo file to the gh-pages repo with a special “editfonts.glyph” logotype:

https://github.com/sugarlabs/edit-fonts-activity/tree/gh-pages/files/fonts/Geo-Regular.ufo

editfonts.glyph

There are two neat things about this approach. First, it uses components we already have; the only work will be laying out the page, which Dave or I can attempt if Yash is too busy. Second, if the user never realizes that the Edit Fonts logotype is editable, it still functions as a logotype. A similar UX design pattern was used for the start screen of the game Super Mario 64; see below:

Mario 64 easter-egg

by Eli Heuer at July 30, 2016 06:30 PM

July 12, 2016

Edit Fonts Activity

Continuous Integration With Travis and flake8

Last Saturday (July 9th) Eli and I met up to review the codebase, and the main issue I identified was that Travis was not set up with flake8 to check that the codebase conformed to the PEP 8 guidelines.

I'd filed Issue #17 for this back at the start of the project, on May 19. Yash had started to develop the .travis.yml file (https://github.com/sugarlabs/edit-fonts-activity/blob/gh-pages/.travis.yml) to build a .xo bundle but hadn't completed this just yet, so I commented out most of the code, and what remained is very simple:

# this makes travis run a fast Docker container system
sudo: false
# we use python 2.7
language: python
python:
  - "2.7"
# we need to install flake8 to use it
before_install:
  - "pip install flake8"
# we check the codebase
script: 
  - "flake8 --statistics --ignore=E402 --exclude=defcon,extractor,fontTools,fontmake,robofab,ufo2ft,ufoLib,snippets ."

You can see there are a few arguments passed, and they are pretty simple.

--statistics prints the number of occurrences of each error, so you can fix the most common issues across the codebase first.

E402 is about the order of imports; since we need to import gi and set required versions before later imports, we can't adhere to that rule, so we ignore it.

We also exclude all the third party libraries, and our snippets.

Eli and I worked together on this, and I finished it up on Sunday in Pull Request #65.

Yash had already set up Travis configuration, at https://travis-ci.org/sugarlabs/edit-fonts-activity, so once this was merged, our button went green:

travis button is green

Finally I added a CONTRIBUTING.md file that explains how to use it.

I'll get a similar Travis setup for the gh-pages branch too.

Perhaps we could also set up a git hook that runs the flake8 command on each commit…
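
A minimal sketch of such a hook, saved as .git/hooks/pre-commit and made executable, with the flags mirroring our .travis.yml:

#!/bin/sh
# Refuse the commit if flake8 reports any problems
exec flake8 --statistics --ignore=E402 --exclude=defcon,extractor,fontTools,fontmake,robofab,ufo2ft,ufoLib,snippets .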

by Dave Crossland at July 12, 2016 06:30 PM