Several beginner questions - Gorenje - fsimx8mm

  • Hi,


I am new to embedded Linux (but I do have experience as a user of Linux on PC, RPi, Armbian, VPS, ...), so my questions will be very basic.

    I hope I can ask in this thread and get help.

I currently have one PicoCore™MX8MM development system, which according to U-Boot reports the following:

    CPU: Freescale i.MX8MMQ rev1.0 1800 MHz (running at 1200 MHz)

    CPU: Commercial temperature grade (0C to 95C) at 57C

    Reset cause: POR

    Model: FSL i.MX8MM PCoreMX8MM board

    DRAM: 1 GiB

    PMIC: BD71847 Rev. 0xa0

    NAND: 512 MiB

    MMC: FSL_SDHC: 0, FSL_SDHC: 1

    Loading Environment from NAND... OK

    auto-detected panel NT35521_OLED

    Display: NT35521_OLED (720x1280)

    Video: 720x1280x24

It appears to be an older version, as its debug UART isn't where it's supposed to be according to any documentation. I was promised a new one, but am currently working with what I have.

Our company's short-term goal is to develop a user interface for a white goods appliance (Gorenje) with a SOM/SoC.

    As we don't have such experience, I am learning how to do this.


I am working with the following documents: FSiMX8MM_FirstSteps_eng.pdf, LinuxOnFSBoards_eng.pdf and FSiMX8_FirstSteps_eng.txt.

I am using a Fedora VM, into which I loaded 20201106 fsimx8mm-B2020.08-pre.tar.bz2.

Following the guides in the mentioned documents, I managed to successfully build the complete system software (U-Boot, kernel and Buildroot).

Now I am trying to transfer it to the board via TFTP and U-Boot.


Meanwhile I have a few questions (more will follow).

1. The most important one, and very confusing to me: I haven't yet grasped the development process for embedded Linux. In MCU apps you develop code, compile, flash to the MCU, test and then flash in serial production. I am not sure how it is done with Linux. So I have the Linux kernel, a Buildroot (or Yocto, ...) filesystem, an FDT (which I will have to adapt to my peripherals later) and U-Boot. I need to develop a GUI application - let's say with Qt - and a custom FDT. How do I then join everything together into one binary to use in serial production? Do I (simplified description) first upload the system software to the devboard using U-Boot and the application via network, set up everything and then download all binaries via U-Boot+TFTP to get an image which is to be used in serial production programming? Or how is everything integrated together? I haven't seen this described anywhere and it seems it isn't addressed in any of your workshops.


2. Developing a GUI. It seems that with F&S our only option is to use Qt. Since this option requires licensing, I am asking if there are any FOSS options that can be used in commercial applications and are still supported on the F&S platform.


3. Building the FDT. Are there any guides on how to build an FDT for specific HW? For example, I don't think my currently used display (NT35521_OLED) is supported in any of the device trees in 20201106 fsimx8mm-B2020.08-pre.tar.bz2. How do I add more devices - like a touchscreen, an I2S sound sink, ...? I assume Linux drivers are required for all such devices (except perhaps some slow peripherals with which we can work directly over (serial) data buses).


4. Buildroot and Yocto. Which one to choose - what are the factors for the selection (as far as I know, Yocto is newer)?


As you can see, these are beginner questions. Hopefully someone can help - maybe via an MS Teams meeting if not here.


    Gregor

  • Small update.

I managed to load and save all four images (U-Boot, kernel, FDT and the Buildroot root filesystem), and after reset it seems U-Boot is stuck reading from NAND:

    U-Boot SPL 2018.03 (Oct 22 2019 - 14:17:10 +0200)

    DDRINFO: start lpddr4 ddr init

    DRAM PHY training for 3000MTS

    check ddr4_pmu_train_imem code

    check ddr4_pmu_train_imem code pass

    check ddr4_pmu_train_dmem code

    check ddr4_pmu_train_dmem code pass

    Training PASS

    DRAM PHY training for 3000MTS

    check ddr4_pmu_train_imem code

    check ddr4_pmu_train_imem code pass

    check ddr4_pmu_train_dmem code

    check ddr4_pmu_train_dmem code pass

    Training PASS

    DRAM PHY training for 400MTS

    check ddr4_pmu_train_imem code

    check ddr4_pmu_train_imem code pass

    check ddr4_pmu_train_dmem code

    check ddr4_pmu_train_dmem code pass

    Training PASS

    DRAM PHY training for 100MTS

    check ddr4_pmu_train_imem code

    check ddr4_pmu_train_imem code pass

    check ddr4_pmu_train_dmem code

    check ddr4_pmu_train_dmem code pass

    Training PASS

    DDRINFO:ddrphy calibration done

    DDRINFO: ddrmix config done

    Normal Boot

    Trying to boot from NAND



Can something be done? As I mentioned, my devboard is an older version - so this might be the problem here.

    I transferred these files from my build folder

    fsimx8mm-B2020.08-pre/test1/buildroot-2019.05.3-fsimx8mm-B2020.08/output/images/:

    Image picocoremx8mm.dtb rootfs.ubifs u-boot.nb


    g

Meanwhile I have a few questions (more will follow).

1. The most important one, and very confusing to me: I haven't yet grasped the development process for embedded Linux. In MCU apps you develop code, compile, flash to the MCU, test and then flash in serial production. I am not sure how it is done with Linux. So I have the Linux kernel, a Buildroot (or Yocto, ...) filesystem, an FDT (which I will have to adapt to my peripherals later) and U-Boot. I need to develop a GUI application - let's say with Qt - and a custom FDT. How do I then join everything together into one binary to use in serial production? Do I (simplified description) first upload the system software to the devboard using U-Boot and the application via network, set up everything and then download all binaries via U-Boot+TFTP to get an image which is to be used in serial production programming? Or how is everything integrated together? I haven't seen this described anywhere and it seems it isn't addressed in any of your workshops.

Buildroot and Yocto, as configured in our example configurations, only provide the runtime environment. So you need an additional step to build your own application software and add it to the system. There are different options for doing this.


    1. Add your own application to Buildroot/Yocto as an additional package and include it in the root filesystem
Here you add your own application as a package to Buildroot or Yocto and use their build process to also build your application. As such a package, the build process will automatically include your application in the root filesystem. Obviously this is only an option if Buildroot's or Yocto's build system is capable of building your application. And it restricts you to command line compilation, so it is not meant to be used together with an IDE.
    2. Compile your application separately, but still add it to the root filesystem
Here you can compile your software in any way you like. Most importantly, you can also use an IDE, where you can test and debug your application. The only thing you have to be aware of is that you have to compile and link against the libraries of Buildroot/Yocto. This basically means you have to use the toolchain that is provided by these environments. Yocto builds its own toolchain and you have to refer to this toolchain in your IDE.
Buildroot does not build its own toolchain but uses our provided toolchain instead. However, it still creates some wrappers in output/host/bin that can (and should) be used instead of the globally installed toolchain, because these wrappers take care of all include and library paths so that they point to Buildroot's versions.
Then, when you are done with development, you simply copy your software to the root filesystem. In Buildroot, you can add a script file that is run at the end of the build process (configured in Buildroot's menuconfig) and use this script to copy all necessary files to the root filesystem in output/target. The resulting root filesystem image contains everything that is in this directory. In Yocto, you need a small recipe to do a similar thing.
    3. Compile and install separately
In this variant, you compile your software as in option 2. But instead of finally copying the software to the root filesystem, you create a separate image file for your application. Then you store this image separately on the board, e.g. in its own partition or as its own UBI volume. At runtime, this image is then mounted separately (e.g. via /etc/fstab) as its own directory or as an overlay on top of the root filesystem.

    Each method has its own pros and cons.
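For option 2, the final copy step can be sketched as a small post-build script. The following is a hedged, self-contained sketch: the application name (myapp), the demo directory and the script contents are all hypothetical placeholders; in a real Buildroot setup you would register such a script in menuconfig so that Buildroot calls it with output/target as its argument.

```shell
#!/bin/sh
# Hypothetical sketch of a Buildroot post-build script. Buildroot calls
# such a script with the target directory (output/target) as $1; we
# default to a local demo directory so the sketch can be run stand-alone.
set -e
TARGET_DIR="${1:-./demo-target}"

# Stand-in for the application binary built with your IDE/toolchain.
mkdir -p ./build
printf '#!/bin/sh\necho "myapp running"\n' > ./build/myapp

# Copy the binary into the root filesystem staging directory; everything
# in this directory ends up in the final root filesystem image.
mkdir -p "$TARGET_DIR/usr/bin"
install -m 0755 ./build/myapp "$TARGET_DIR/usr/bin/myapp"
echo "installed $TARGET_DIR/usr/bin/myapp"
```

On a real board the script would of course copy your actual cross-compiled binary instead of generating a placeholder.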

2. Developing a GUI. It seems that with F&S our only option is to use Qt. Since this option requires licensing, I am asking if there are any FOSS options that can be used in commercial applications and are still supported on the F&S platform.

There are more options available. The GUI can either use Wayland or render directly to the framebuffer. X11 was available for previous boards, but NXP no longer provides any X11 support for i.MX8 CPUs, so Wayland is usually the way to go. On top of this, you are free to use your preferred environment; for example GTK+ is also possible, or Enlightenment or DirectFB.


But even if you go with Qt, you are not necessarily required to buy a commercial license, as long as you restrict your software to Qt modules that are also released under the LGPL (this typically happens a couple of years after a module's initial commercial-only release). If you do not need the newest 3D stuff and the newest internet protocols, this may still be sufficient for a pleasant application.

3. Building the FDT. Are there any guides on how to build an FDT for specific HW? For example, I don't think my currently used display (NT35521_OLED) is supported in any of the device trees in 20201106 fsimx8mm-B2020.08-pre.tar.bz2. How do I add more devices - like a touchscreen, an I2S sound sink, ...? I assume Linux drivers are required for all such devices (except perhaps some slow peripherals with which we can work directly over (serial) data buses).

The PicoCoreMX8MM is still rather new and we are in a transition phase to make the software work as similarly as possible to our previous boards. Normally you only have to take the display timings from the data sheet and replace them in the device tree. That is not too difficult and we can assist you with this if required. In the future there will also be an option to configure the display in U-Boot by setting some environment variables, but we are still working on this.
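To illustrate the kind of change involved, a display-timings node in a device tree typically looks like the following fragment. All numbers here are placeholders and must be replaced with the values from your panel's data sheet; the node and label names are also just examples and depend on the actual board device tree.

```dts
/* Placeholder values - take the real numbers from the panel data sheet. */
display-timings {
        native-mode = <&timing0>;
        timing0: timing0 {
                clock-frequency = <66000000>;
                hactive = <720>;
                vactive = <1280>;
                hfront-porch = <32>;
                hback-porch = <32>;
                hsync-len = <8>;
                vfront-porch = <16>;
                vback-porch = <16>;
                vsync-len = <4>;
        };
};
```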


For devices that we have in our device tree but that you do not need, there is sometimes a #define that you can comment out to disable the device. Or you simply comment out the appropriate device nodes. For any extra hardware on your carrier board, you have to add appropriate nodes to the device tree. The necessary properties are explained in Documentation/devicetree/bindings in the Linux source code tree. Again, we can assist you if you have any specific problems.

4. Buildroot and Yocto. Which one to choose - what are the factors for the selection (as far as I know, Yocto is newer)?

There is no older or newer; both systems are updated on a regular basis. Buildroot gets an update every three months, Yocto every six months. However, we cannot follow every release of the upstream packages, so our update steps are a little slower.


From our point of view, Buildroot is much easier to use. You have a menu-driven configuration system; if you have ever configured a Linux kernel, then you know it - it is the same system in Buildroot. So you see all available packages, select all packages that you need, deselect all packages that you don't, save your configuration and finally say "make". Buildroot now fetches all source packages from all over the world, applies some patches to support cross compilation and then builds all the packages. Finally it generates an image with all these components. This is the root filesystem that you download to your board.

If you need an additional package or want to remove a package, go to menuconfig, tick or untick the package, call "make" again and you have the new image (well, there are some constraints here and there, but you get the idea). Most importantly, you do not need to know anything about the build process at all - just call make. For all parts where you are supposed to change something, there are appropriate hooks that you can use easily. Even if you want to add your own package, you simply read the Buildroot documentation, which takes about 4 hours; then you know how to do this.


Yocto is different. Of course the basic idea is similar: you define which packages you need, and Yocto will download and build them all and create a root filesystem image at the end. However, Yocto has its own build system called bitbake. This is driven by so-called recipes - small script files that tell bitbake what to download and how to build it. Unfortunately these recipes are used for everything, so even adding or removing a package needs a modification in one or more of them. You do not even get an overview of which packages are currently part of your final image; you can only list all the recipes that are involved. So modifying anything in Yocto requires a) knowledge of the recipe script language and b) a full understanding of the build process, so that you know where to modify a variable or where to add a small script function. Our own experience has shown that you need at least two weeks of training until you are capable of doing anything other than building some sample images.


    Our goal is always to make it as easy as possible for our customers to work with our boards. This is why our primary system is Buildroot. New releases from our software are available for Buildroot first and for Yocto later.


There are two things that may force you to use Yocto. First, support for the big web browsers, i.e. Chromium or Firefox, is only available in Yocto, not in Buildroot. There is a small HTML5 browser called Midori, but of course with far fewer features. And second, Yocto does one thing better than Buildroot: before building the root filesystem image, it creates a repository of all the binary packages that it has compiled, and the image is then actually built from this repository. This means that if you need several different versions of your image, for example for different variants of your device, you simply define a big Yocto configuration that contains all the packages that may ever be needed by any variant, and then pull different subsets of these packages into your different filesystem images. Theoretically you can even put this package repository on a server so that your boards can download additional packages at runtime in the field, like on any desktop Linux distribution.


    So the decision between Yocto and Buildroot would go like this:

    • Do I need any of the packages that are only provided by Yocto? Then use Yocto.
    • Do I need any of the unique features of Yocto like the package repository? Then use Yocto.
    • Otherwise use Buildroot.

    Your F&S Support Team

    F&S Elektronik Systeme GmbH
    As this is an international forum, please try to post in English.
    Da dies ein internationales Forum ist, bitten wir darum, Beiträge möglichst in Englisch zu verfassen.

  • Hello guser,


I investigated your problem and indeed, it seems that your old board version and the current release do not fit together.

Also, on i.MX8 the U-Boot gets loaded by the SPL (Secondary Program Loader), which does not get replaced if you replace only the u-boot.nb0.

Replacing the SPL is quite complicated at the moment, especially on an old board revision like yours, but it can be done if you really need to.

Your new PicoCoreMX8MM is on its way to you, so I would recommend working with the new revision, but if you are on a tight schedule I can assist you in replacing the SPL.


    Your F&S Support Team

Thank you for the extensive feedback on my basic questions. I'm sure I'll have some more, but for now I'm studying some more documentation (including Buildroot's).

Meanwhile, if it's not too much trouble, I'd appreciate your help in solving the bootloader issue.

I suspect it will take some time until the new DevKit arrives (especially as it will probably be equipped with a different display by Datamodul), so meanwhile I need something usable to test what I learn.

  • ...

There are more options available. The GUI can either use Wayland or render directly to the framebuffer. X11 was available for previous boards, but NXP no longer provides any X11 support for i.MX8 CPUs, so Wayland is usually the way to go. On top of this, you are free to use your preferred environment; for example GTK+ is also possible, or Enlightenment or DirectFB.


But even if you go with Qt, you are not necessarily required to buy a commercial license, as long as you restrict your software to Qt modules that are also released under the LGPL (this typically happens a couple of years after a module's initial commercial-only release). If you do not need the newest 3D stuff and the newest internet protocols, this may still be sufficient for a pleasant application.

    ...

I have another question regarding Qt licensing - see the quoted comment above. Are you sure it is correct regarding Qt?

I assume we'd need the Qt for Device Creation package to develop a GUI for fsimx8mm.

This package is available only under a commercial license according to Qt Licensing.

No, you can create a regular Qt application like on any desktop; you do not need "Qt for Device Creation". Each module has its own list of licenses. If you are in Buildroot and check all the packages in package/qt5 (the licenses are listed in the makefiles with .mk extension), you will see that each has a different set. And there are even differences between the versions. For example:


    If you use Qt-5.6, then you have the following licenses for the following modules (I only list a few):


qt5wayland: GPL-3.0 or LGPL-2.1 with exception or LGPL-3.0, GFDL-1.3 (docs)

    qt5charts: -

    qt5quickcontrols: GPL-2.0 or GPL-3.0 or LGPL-3.0, GFDL-1.3 (docs)

    qt5quickcontrols2: GPL-3.0 or LGPL-3.0, GFDL-1.3 (docs)


This means if you only need qt5wayland, you can use LGPL-2.1 or LGPL-3.0. If you also need qt5quickcontrols and/or qt5quickcontrols2, then you can only use LGPL-3.0. But you cannot use qt5charts at all, not even in a fully open-source project, because it is only available under the commercial license.


    If you use Qt-5.12, then the set of licenses is slightly different:


qt5wayland: GPL-2.0+ or LGPL-3.0, GPL-3.0 with exception (tools), GFDL-1.3 (docs)

    qt5charts: GPL-3.0

    qt5quickcontrols: GPL-2.0 or GPL-3.0 or LGPL-3.0, GFDL-1.3 (docs)

    qt5quickcontrols2: GPL-3.0 or LGPL-3.0, GFDL-1.3 (docs)


This means the LGPL-2.1 was dropped from qt5wayland; you have to use LGPL-3.0 now, even if you do not need qt5quickcontrols and qt5quickcontrols2. You still can't use qt5charts under this license, but you can now use qt5charts at least in a fully open-source project, because it is available under GPL-3.0.


    So it is very important to know what Qt5 modules you will actually need. And you cannot use some modules if you want to stay with LGPL.
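To inspect those license sets yourself, you can grep the *_LICENSE variables in Buildroot's package makefiles. The following self-contained sketch mocks up a package/qt5 directory with two hypothetical .mk files so that it can be run anywhere; on a real Buildroot tree only the final grep is needed, and the license strings are simply whatever the tree declares.

```shell
#!/bin/sh
# Self-contained sketch: inspect the license declarations of Qt5 packages.
# The mkdir/cat lines only mock up a package/qt5 directory for this demo;
# on a real Buildroot tree you would run just the final grep.
set -e
mkdir -p package/qt5/qt5wayland package/qt5/qt5charts
cat > package/qt5/qt5wayland/qt5wayland.mk <<'EOF'
QT5WAYLAND_LICENSE = GPL-3.0 or LGPL-2.1 with exception or LGPL-3.0, GFDL-1.3 (docs)
EOF
cat > package/qt5/qt5charts/qt5charts.mk <<'EOF'
QT5CHARTS_LICENSE = Commercial license only
EOF

# Print each package makefile together with its declared license set.
grep -H "_LICENSE = " package/qt5/*/*.mk
```

The variable naming (PACKAGENAME_LICENSE) follows Buildroot's package conventions; the exact license strings vary between Buildroot and Qt versions, as discussed above.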


This licensing system may also force you to stay on an older Qt version. For example, if you began your software with LGPL-2.1 and Qt-5.6 and you used qt5wayland, then you cannot easily update to Qt-5.12, because there you would be forced to use LGPL-3.0, which may contradict the license obligations of other (non-Qt) modules in your software. So you would have to stay on Qt-5.6.


Please note that Qt changed the licensing conditions a few months ago with regard to LTS versions (e.g. Qt-5.15) and Qt-6. If you want to use the commercial license, this can now only be done via a subscription. So you need to pay regularly and, most importantly, the license becomes void if you stop paying. So you have to renew your subscription every year for as long as you want to sell your devices. This is quite a risk, as you do not know how the subscription fees will develop in the future. From our point of view this is now a big drawback of Qt.


As you see, the licensing is quite complex. And if you ask people from Qt, they will always direct you to the commercial license - of course, because they want to earn money. This is perfectly OK if you can live with the license fee, and of course it is easier: you do not have to care about what you may use or not. But if you want to stay free of costs, you have to be very careful about what you actually use.


    To summarize:


- If you want to be OpenSource, there is no big problem. Most packages are available under the GPL. Exceptions may exist, like qt5charts above.

    - If you want to be ClosedSource and stay free of costs under LGPL, you are restricted to those modules available under LGPL.

    - If you want to be ClosedSource and use all modules, you need the commercial license.


    Your F&S Support Team



  • Thank you for the explanation.

Is licensing something that your customers usually handle themselves, or do you offer support services for that? Or are there other legal companies that take care of licensing for commercial embedded Linux products?

I would like to test video rendering on my current devboard (and maybe on the above-mentioned single-core version) using the GPU.


Using GStreamer I was unable to play any video, as I always get error messages about codecs. For the moment I have an H.264 sample video (I also tried different codecs in the past) on the target board, and GStreamer can't find appropriate codecs, even though an H.264 plugin is installed:

    gst-launch-1.0 playbin uri=file:/root/ava.avi

    Setting pipeline to PAUSED ...

    Pipeline is PREROLLING ...

    Missing element: MPEG-1 Layer 3 (MP3) decoder

    WARNING: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: No decoder available for type 'audio/mpeg, mpegversion=(int)1, mpegaudioversion=(int)1, layer=(int)3, rate=(int)48000, channels=(int)2, parsed=(boolean)true'.

    Additional debug info:

    gsturidecodebin.c(921): unknown_type_cb (): /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0

    Missing element: ITU H.264 (Main Profile) decoder

    WARNING: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: No decoder available for type 'video/x-h264, variant=(string)itu, framerate=(fraction)25/1, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)81/256, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, profile=(string)main, level=(string)4.1, codec_data=(buffer)014d4029ffe1001b674d4029e980a00b77fe00a2020020000003002000000641e3062701000468efbc80'.

    Additional debug info:

    gsturidecodebin.c(921): unknown_type_cb (): /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0

    ERROR: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: Your GStreamer installation is missing a plug-in.

    Additional debug info:

    gsturidecodebin.c(988): no_more_pads_full (): /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0:

    no suitable plugins found:

    gstdecodebin2.c(4643): gst_decode_bin_expose (): /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0:

    no suitable plugins found:

    Missing decoder: MPEG-1 Layer 3 (MP3) (audio/mpeg, mpegversion=(int)1, mpegaudioversion=(int)1, layer=(int)3, rate=(int)48000, channels=(int)2, parsed=(boolean)true)

    Missing decoder: ITU H.264 (Main Profile) (video/x-h264, variant=(string)itu, framerate=(fraction)25/1, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)81/256, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, profile=(string)main, level=(string)4.1, codec_data=(buffer)014d4029ffe1001b674d4029e980a00b77fe00a2020020000003002000000641e3062701000468efbc80)



So for now I use VLC, which can't find the OpenGL API:

    egl_wl gl error: cannot select OpenGL API

So I believe it renders video using the CPU. On the quad-core I get approx. 17% CPU usage from VLC, so it would probably work even on a single-core CPU - maybe even on the NANO.

Can you help with VLC rendering on the GPU with OpenGL?


I also noticed that if I use GStreamer with the waylandsink sink, all I get is a green square.

If I use fbdevsink, it renders the contents properly, despite Weston being loaded.

Why is that, if Weston is running?