
Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Rob Landley
 

On Monday 28 December 2009 04:27:04 Andy Green wrote:
I wasn't suggesting you don't have firsthand experience all over the
place, eg, busybox, I know it. What I do suggest is that the principles I
have been bothering your Christmas with here can really pay off, and they are
in the opposite direction to Qemu, bloated bootloaders, and buildroot-style
distro production.
In the PDF I spent a dozen pages pointing out that I think buildroot is silly
and how it diverged from its original goals. Why do you bring it up here?

Building natively is nice. I think cross compiling extensive amounts of stuff
is generally counterproductive, and you don't seem to disagree with that.
(However, if there weren't cases where cross compiling _does_ make sense I
wouldn't bother building the second stage cross compilers in my project and
packaging them up for other people to use. "I don't recommend it" != "people
who do this are stupid".)

You're conflating a bunch of issues. You can boot a full distro under qemu or
under real hardware, so "use a native distro" is orthogonal to "develop under
emulation vs develop on target hardware". And building a hand-rolled system
or assembling prebuilt distro bits can both be done natively (under either
kind of "native"), so that's orthogonal too.

I'm really not understanding what argument you're making. "The one specific
configuration I've chosen to use is better than any other possible
configuration"... Possibly there's a "for your needs" attached somewhere that
you're missing? You seem to be confusing "that's not an interesting option
for me" with "that's not an interesting option". You're conflating a bunch of
different issues into a One True Way, which is sad.

Fedora provides a whole solution there, with the restriction it's
designed for native build, not cross.
QEMU: it's not just for breakfast anymore.
That's right Qemu often requires lunch, teatime and supper too to build
anything :-)
Which is why you hook it up to distcc so it can call out to the cross
compiler, which speeds up the build and lets you take advantage of SMP.
(Pages 217-226 of the above PDF.)
Or you just do it native and you don't care about extreme rebuild case
because you're using a distro that built it already.
I find reproducibility to be kind of nice. (Remember when Ubuntu dropped
PowerPC support? Or how Red Hat Enterprise is finally dropping Itanic? Even
today, can you build a full distro natively on new hardware like microblaze or
blackfin?)

I also don't see any difference between your "let the distro handle everything"
and "the vendor supplies a BSP, why would you need anything else?" That's
certainly a point of view, and for you it may be fine. Anything can be
outsourced. But "I don't have to do it, therefore nobody does" is a
questionable position.

There's more things you can do to speed it up if you want to go down that
rabbit hole (which the presentation does), and there's more work being
done in qemu. (TCG was originally a performance hit but has improved
since, although it varies widely by platform and is its own rabbit hole.
Also switching to gigabit NIC emulation with jumbo frames helped distcc a
lot.)
Just saying people don't do distribution rebuilds for their desktop or
server boxes unless they're Gentoo believers.
Linux From Scratch does not exist? My friend Mark (who inherited a "Slackware
for PowerPC" cross-build system) is in reality unemployed? Nobody ever
actually works _on_ Fedora and needs to do a development build?

It's interesting to see someone who can imply these things with a straight
face on an embedded development list.

So the need to consider
this heavy-duty stuff only exists at the distro. In Fedora's case, guys
at Marvell are running it and have access to lots of fast physical
hardware they hooked up to Koji (you touch on this in your PDF). I
don't think they were waiting to hear about Qemu and are now going to
drop that and move to emulation.
You also seem to be unaware that QEMU was only started in 2003 (first commit to
the repository, Feb 18, 2003), and didn't even work on x86 all that well until
the end of 2005. Decent arm support started showing up in 2006 but it was
darn wonky, and the switch from dyngen to TCG didn't happen until Feb 1, 2008,
before which you couldn't even build qemu with gcc 4.x:

http://landley.net/qemu/2008-01-29.html#Feb_1,_2008_-_TCG

So they didn't "drop" support; QEMU wasn't ready for prime time back
in 2006 when my then-boss (Manas Saksena) quit Timesys to go launch Fedora for
Arm. (The boss I had after that, David Mandala, is now in charge of Ubuntu
Mobile. Timesys had some darn impressive alumni, pity the company screwed up
so badly none of us stayed around. I myself stayed through four bosses, three
CEOs, and outlasted 80% of my fellow engineers...)

By the way:

http://fedoraproject.org/wiki/Architectures/ARM/HowToQemu

You continue to conflate orthogonal issues.

There is a REASON QEMU project hasn't shipped its 1.0 release yet. Dismissing
it as irrelevant to the future based on your experiences with it years ago is
kind of hilarious, really. (And dismissing emulation in general as of
interest to nobody... Sigh.)

They're in the business of
making fast ARM chips and they're going to outpace Qemu.
They're not in the same business. They don't compete. You don't deploy on
qemu, and when you go down to Fry's to get a laptop the x86-64 is likely to
continue to be the main option for the foreseeable future.

But in general, Moore's Law says that qemu on current PC hardware is
about the speed of current PC hardware seven years ago. (And obviously
nobody ever built anything before 2003. :)
The speed surely depends on the architecture being emulated. I did try
Qemu on ARM on a nice desktop box here and it seemed unworkable to me.
You couldn't make it work, therefore it can't be made to work?

I am perfectly happy to trust Fedora to take care of distribution
rebuilds for me.
Good for you.

Did it ever occur to you that there are people out there who do the bits you
don't personally do? That all these packages don't emerge from zeus's
forehead on a quarterly basis? That someone, somewhere, is actually
interested in how this stuff _does_ work?

Newer ARM platforms like Cortex8+ and the Marvell Sheevaplug will
outstrip emulated performance on a normal PC. There are 2GHz multi-core
ARMs coming as well apparently. So I took the view I should ignore Qemu
and get an early start on the true native build that will be the future
of "native build" as opposed to cross due to that.
Pages 24-34 of the above PDF go over this. The first two pages are on
the advantages of native compiling on real hardware, the next eight pages
are on the disadvantages. It can certainly be made to work, especially
in a large corporation willing to spend a lot of money on hardware as a
_prerequisite_ to choosing a deployment platform.
This is what I have been calling "buildroot thinking" again. What do
you think you will be rebuilding exactly out of Fedora when you put it
on an ARM11+ system? Typically, it's going to be zero packages, nothing
at all. What you will need to build typically is this:
"All the world's a Cortex A8, and always will be."

Look, go have fun. I'm sure what you've chosen to do is going to work just
great for you, and you'll never need to care about anything else. Personally,
I have no interest in speaking to you ever again about anything, and I have a
spam filter for situations like this.

I'll even make you happy: "You win!" There you go.

*plonk*

Rob
--
Latency is more important than throughput. It's that simple. - Linus Torvalds


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Andy Green <andy@...>
 

On 12/28/09 19:57, Somebody in the thread at some point said:

Hi Peter -

Thanks for an interesting discussion and all your free buildroot
advertising ;)

(buildroot maintainer)
Hey, you are welcome... In the situations where you can't play the distro game, buildroot is the lifesaver. As with U-Boot, I have experienced in detail the amount of work involved in what buildroot delivers, and it has to be respected for what it does.

I'm really saying there's a region where buildroot is the right answer and a threshold beyond which a "real distro" is the right answer; we should not just keep on doing what we have been doing.

(But you want to be careful you don't grow into OpenEmbedded's build system. For me it failed partway through a 1000+ package build while trying to build *host* dbus libs (yes, host dbus).

I just wanted to package "hello world" on Openmoko. To be fair, they did fix it in the end so it would build host dbus OK. But I felt that was a very long way from the point.)

Andy> This thread was meant to be about merits of Qi, it's kinda gone
Andy> off into embedded with distro rootfs because the philosophy is
Andy> related. In both cases burden on the developer is intended to be
Andy> removed and effort simplified to get the job done more reliably
Andy> and quicker.

Sure, if your embedded device is very PC like (size/functionality) that
makes sense.
Thanks. As written, the threshold seems to be at ARM11+, with SD in particular making it into enough of a "PC" that you don't have to worry about bash vs ash, or where a few hundred MB of storage is going to come from.

-Andy


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Peter Korsgaard
 

"Andy" == Andy Green <andy@...> writes:
Hi,

Andy> This is what I have been calling "buildroot thinking" again.

Thanks for an interesting discussion and all your free buildroot
advertising ;)

(buildroot maintainer)

Andy> This thread was meant to be about merits of Qi, it's kinda gone
Andy> off into embedded with distro rootfs because the philosophy is
Andy> related. In both cases burden on the developer is intended to be
Andy> removed and effort simplified to get the job done more reliably
Andy> and quicker.

Sure, if your embedded device is very PC like (size/functionality) that
makes sense.

--
Bye, Peter Korsgaard


Re: How to store kernel panic/oops

Marco Stornelli <marco.stornelli@...>
 

David Woodhouse wrote:

Can't it be done with what's in the tree already? Just create an MTD
device using phram or something else, then point mtdoops at it
Yes of course, if possible we shouldn't reinvent the wheel, but I
wondered if there was something more specific. To add mtdoops (more or
less 1k) we have to add the MTD subsystem (more or less 14k) to the
kernel to achieve this, and it's all overhead.

Marco


Re: CELF Project Proposal - Feasibility analysis of Android introduction in a completely tested industrial device

Benjamin Zores <ben@...>
 

Raffaele Recalcati wrote:

Looking at
http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Ftorvalds%2Flinux-2.6.git&a=search&h=HEAD&st=commit&s=android
my idea seems to be ... not very good at all !!
What do you think?
I'm not a kernel mainline developer so I don't understand the real meaning.
Basically, Android is such a hacked-up version of Linux that it is hard to maintain. Google has written some drivers and refused to maintain them or
make them follow the upstream kernel, so devs get frustrated, which is legitimate imho.

Also, they have modified so many things in Linux (kernel + legacy runtime environment) that I'm surprised this OS received such fame.

Ben


Re: CELF Project Proposal - Feasibility analysis of Android introduction in a completely tested industrial device

recalcati
 

Raffaele Recalcati wrote:
2009/12/18 Raffaele Recalcati <lamiaposta71@...>:
Summary: Feasibility analysis of Android introduction in a completely
tested industrial device.

Description: By now Android has been ported to 600MHz Cortex A8 CPUs or similar.
The declared Android requirements are instead lower: about a 200MHz ARM9
CPU with a 100MHz RAM bus.
So I think the growing interest in this O.S. is missing ports to
less powerful CPUs.
The reasons to do this porting are commercial, because of Google's market
power, but also technical, because the Android debugging environment
is very nice for non-embedded developers.
This could help the diffusion of open source embedded Linux.
This is interesting. Can you let me know if the focus of this work
is to experiment with the lower bounds of Android scalability, or
whether the focus is on Android use in industrial devices?

If the latter, then it would be good to hear more about what might
be needed to extend (or reduce :-) ) Android to fit this market.

I'll add a proposal for this, but I'd like to hear more to clarify
the proposal.

Thanks,
-- Tim

=============================
Tim Bird
Architecture Group Chair, CE Linux Forum
Senior Staff Engineer, Sony Corporation of America
=============================

Looking at

http://git.kernel.org/?p=linux%2Fkernel%2Fgit%2Ftorvalds%2Flinux-2.6.git&a=search&h=HEAD&st=commit&s=android

my idea seems to be ... not very good at all !!

What do you think?
I'm not a kernel mainline developer so I don't understand the real meaning.
Below is my new proposal, but before accepting it please hear
this new idea, which could be better and could also be used with the
2.6.33 kernel:
"Adding to Maemo or Debian or Gentoo embedded systems a debugging
environment similar to Android: I mean the possibility to connect to a
single debugging "server" and, in a very easy graphical way (Eclipse or Qt
Creator), connect gdb to the desired process, browse the rootfs, trace
CPU usage, ..."
What I really like about Android is the fantastic environment, with the
emulator that can do step-by-step debugging....


---------------------------------------------------------
Summary: Feasibility analysis of Android introduction in a completely
tested industrial device.

Description: Android is an important new graphical interface and a Java
virtual machine optimized for embedded devices.
Android's break with traditional embedded GNU/Linux operating systems
has discouraged enhancing an already existing embedded GNU/Linux
industrial device with at least Android's graphical interface.

By now Android has been fully ported to Cortex A8 CPUs, for new mobile
phone projects started from scratch.
The reasons to do this work are commercial, because of Google's market
power, but also technical, because the Android debugging environment is
very nice for non-embedded developers.
This could help the diffusion of open source embedded GNU/Linux from a
wider point of view.

The work would probably be against the 2.6.31 kernel.
The idea is to preserve the industrially tested base while adding the
Android graphical interface and debugging features.
The CPU dependency of the port will be kept as small as possible.
The most important kernel interfaces will be investigated: at least the
framebuffer for the TFT LCD, the touchscreen, the AC'97 audio interface,
and USB host for USB sticks.

Related work:
* Android Porting - http://www.kandroid.org/android_pdk/
* Android Pxa270 - http://android-pxa270.sourceforge.net/

Scope:
This should take more than 1 month for feasibility analysis.
------------------------------------------------------

--
www.opensurf.it


Re: How to store kernel panic/oops

David Woodhouse <dwmw2@...>
 

On Mon, 2009-12-28 at 12:43 +0100, Marco Stornelli wrote:
It would be nice to have a "ramoops" to save in a circular buffer in a
persistent ram this kind of information. Any comments? Is there already
anything similar out-of-tree?
Can't it be done with what's in the tree already? Just create an MTD
device using phram or something else, then point mtdoops at it.
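As a minimal sketch of that suggestion: the RAM address, size, and MTD name below are board-specific placeholders, and the reserved region must be kept out of the kernel's normal memory map (e.g. via mem= on the kernel command line).

```shell
# Turn a reserved chunk of RAM into an MTD device with phram
# (parameter format: phram=<name>,<start>,<length>):
modprobe phram phram=oopsram,0x0fe00000,0x00040000

# Point mtdoops at that device (mtddev accepts a name or number):
modprobe mtdoops mtddev=oopsram

# After the next crash and warm reboot, recover the log from the
# corresponding /dev/mtdX node:
strings /dev/mtd0 | less
```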

--
David Woodhouse Open Source Technology Centre
David.Woodhouse@... Intel Corporation


How to store kernel panic/oops

Marco Stornelli <marco.stornelli@...>
 

Hi,

I know the open project proposal 2010 is closed, but this is just to start
a discussion. It would be nice to save oops/panic output automatically in a
structure/file in RAM. At the moment there are two ways to save this
information: mtdoops (saves the information in flash) and kdump/kexec
(we can extract the dmesg from the vmcore file). These tools have some
drawbacks:

1) there are embedded systems without a flash where to save the information;
2) we could consider this kind of log too volatile for flash: there's
no reason to store it for a long time, and it's important to recover
and read it as soon as possible, at the next boot for example;
3) kdump requires a lot of RAM and resources for embedded systems;
4) kexec is available only for some archs.

It would be nice to have a "ramoops" to save in a circular buffer in a
persistent ram this kind of information. Any comments? Is there already
anything similar out-of-tree?

Marco


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Andy Green <andy@...>
 

On 12/28/09 00:21, Somebody in the thread at some point said:

Hi -

I started programming on a commodore 64. By modern standards, that system is
so far down into "embedded" territory it's barely a computer. And yet people
did development on it.
My dear Rob, I got started on a PET; I can understand your POV :-)

kicked into the server space by the iPhone and such. I want to follow Moore's
Law down into disruptive technology territory and find _out_ what it does.
The big challenge I see is delivering highly complex Linux devices with insufficient developers in a way that won't disappear up its own ass and kill the project / customer with delays or failure to perform.

It's nice if the device is efficient with every cycle, but that is a geek preoccupation. Many customers in suits will tell you to spend an extra $1 to overcome it with hardware and gain back $5 from accelerated time to market, so long as they can depend on the high quality of the software base not to kill that logic with delays.

-Andy


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Andy Green <andy@...>
 

On 12/27/09 23:15, Somebody in the thread at some point said:

Hi Rob -

I've also spent the last few years developing a project that produces native
built environments for various QEMU targets and documents how to bootstrap
various distros under them:

http://impactlinux.com/fwl

So I do have some firsthand experience here.
I wasn't suggesting you don't have firsthand experience all over the place, eg, busybox, I know it. What I do suggest is that the principles I have been bothering your Christmas with here can really pay off, and they are in the opposite direction to Qemu, bloated bootloaders, and buildroot-style distro production.

Fedora provides a whole solution there, with the restriction it's
designed for native build, not cross.
QEMU: it's not just for breakfast anymore.
That's right Qemu often requires lunch, teatime and supper too to build
anything :-)
Which is why you hook it up to distcc so it can call out to the cross
compiler, which speeds up the build and lets you take advantage of SMP.
(Pages 217-226 of the above PDF.)
Or you just do it native and you don't care about extreme rebuild case because you're using a distro that built it already.

There's a critical advantage to building specifically in the execution environment, which you cover in three words on slide 14 of your PDF; I expand on it at the end of this mail.

That's also why my FWL project uses a statically linked version of busybox,
because the static linking avoids the extra page retranslations on each exec
and thus sped up the ./configure stage by 20%. (Pages 235-236 of PDF.)

There's more things you can do to speed it up if you want to go down that
rabbit hole (which the presentation does), and there's more work being done in
qemu. (TCG was originally a performance hit but has improved since, although
it varies widely by platform and is its own rabbit hole. Also switching to
gigabit NIC emulation with jumbo frames helped distcc a lot.)
Just saying people don't do distribution rebuilds for their desktop or server boxes unless they're Gentoo believers. So the need to consider this heavy-duty stuff only exists at the distro. In Fedora's case, guys at Marvell are running it and have access to lots of fast physical hardware they hooked up to Koji (you touch on this in your PDF). I don't think they were waiting to hear about Qemu and are now going to drop that and move to emulation. They're in the business of making fast ARM chips and they're going to outpace Qemu.

But in general, Moore's Law says that qemu on current PC hardware is about the
speed of current PC hardware seven years ago. (And obviously nobody ever
built anything before 2003. :)
The speed surely depends on the architecture being emulated. I did try Qemu on ARM on a nice desktop box here and it seemed unworkable to me. I am perfectly happy to trust Fedora to take care of distribution rebuilds for me.

Newer ARM platforms like Cortex8+ and the Marvell Sheevaplug will
outstrip emulated performance on a normal PC. There are 2GHz multi-core
ARMs coming as well apparently. So I took the view I should ignore Qemu
and get an early start on the true native build that will be the future
of "native build" as opposed to cross due to that.
Pages 24-34 of the above PDF go over this. The first two pages are on the
advantages of native compiling on real hardware, the next eight pages are on
the disadvantages. It can certainly be made to work, especially in a large
corporation willing to spend a lot of money on hardware as a _prerequisite_ to
choosing a deployment platform.
This is what I have been calling "buildroot thinking" again. What do you think you will be rebuilding exactly out of Fedora when you put it on an ARM11+ system? Typically, it's going to be zero packages, nothing at all. What you will need to build typically is this:

- bootloader (using Fedora's cross compiler package)
- kernel (using Fedora's cross compiler package)
- your own apps and libs (native build on device against libs there)

In addition you will need to take evasive action around the boot flow, but to get started that's just init=/bin/my-init.sh
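A hedged sketch of that short list; the Fedora cross-toolchain package name and the defconfig target are assumptions for illustration, not verified against any particular release:

```shell
# Bootloader and kernel: cross compiled on the PC using the
# distro-packaged cross toolchain.
yum install gcc-arm-linux-gnu
make ARCH=arm CROSS_COMPILE=arm-linux-gnu- myboard_defconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnu- zImage

# Your own apps and libs: built natively on the device against
# the distro's packaged -devel libraries.
gcc -o myapp myapp.c $(pkg-config --cflags --libs glib-2.0)

# Boot-flow evasive action: bypass the distro's init for early
# bring-up by appending this to the kernel command line:
#   init=/bin/my-init.sh
```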

For hobbyists, small businesses, and open source developers in general, there
are significant advantages to emulation. (Page 208 comes to mind.) And if you
_are_ going to throw money at hardware, x86-64 continues to have better
price/performance ratio, which was always its thing.
The point is that in this game you will definitely be throwing lots of money at hardware: your ARM device platform. Since all products start with zero users, and we get to be the first, we had best spend all our time gaining experience with the hardware we plan to ship.

If there are problems with upstream, then unless it's something deep in the plumbing like cache behaviour (which is a kernel issue), it can normally be demonstrated in the code easily enough.

Pages 68-71. If your definition of embedded development is using off the shelf
hardware and installing prebuilt binary packages into it, life becomes a lot
easier, sure.
Well that's exactly my point, except the shelf the hardware came off is the one with your own prototype ARM devices. No Qemu, no mass rebuilds, no funky "it's embedded so we have to whip ourselves with nettles".

If life's a lot easier after blowing away the buildroot style and the infrastructure for it, why would anyone consider doing it the hard way on devices that can handle the faster, more reliable, and well-understood packaged distro rootfs basis? It really does become like working on a desktop Linux box with Fedora.

For a lot of cases that's a few small
app packages that are mainly linking against stuff from the distro and
they're not too bad to do natively.
Pages 78-84
...

I have actually been living these principles for the past couple of years; consider this a report from a specific front line rather than a review of possibilities.

Package control or source control? (Different page ranges...)
Package control is also source control; that is a whole other advantage of using a packaged distro: license compliance becomes really easy. A source RPM is generated along with the binary ones, so you just need to get the SRPMs for the package set you are shipping (which you can list with rpm -qa) and publish them for GPL compliance. You can also grep the packages for license information and automate other tasks that way (such as excluding sources).
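That compliance workflow can be sketched with stock rpm tooling (the query tags are standard RPM ones; yumdownloader comes from yum-utils, and the exact steps will vary per distro):

```shell
# List the source RPMs behind everything installed on the image:
rpm -qa --qf '%{SOURCERPM}\n' | sort -u > shipped-srpms.txt

# Fetch those SRPMs for publication (GPL compliance):
yumdownloader --source $(rpm -qa --qf '%{NAME} ')

# Query the package database for licenses, e.g. to find GPL bits:
rpm -qa --qf '%{NAME}: %{LICENSE}\n' | grep -i gpl
```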


Anyway, here's the reason promised earlier that building outside the execution environment will bite you: you need to keep the "-devel" stuff, includes and libraries, in sync. You can easily install a new version of something and its -devel on your actual device while still building against the old version of it in your Qemu or cross build environment.

Building on the device you are running on, and using packaged builds for your own apps removes this issue completely.
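One hedged way to catch that -devel skew before a build, assuming an rpm-based device reachable over ssh (libfoo and the host name mydevice are placeholders):

```shell
# Compare the runtime library version on the device with the
# -devel package version in the build environment; warn on skew.
dev=$(ssh mydevice rpm -q --qf '%{VERSION}-%{RELEASE}' libfoo)
bld=$(rpm -q --qf '%{VERSION}-%{RELEASE}' libfoo-devel)
if [ "$dev" != "$bld" ]; then
    echo "WARNING: libfoo skew: device=$dev build=$bld" >&2
fi
```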


This thread was meant to be about merits of Qi, it's kinda gone off into embedded with distro rootfs because the philosophy is related. In both cases burden on the developer is intended to be removed and effort simplified to get the job done more reliably and quicker.

My experience has been that this kind of talk makes people feel jumpy when they're invested in the other philosophies. I hope what has been discussed has left people at least wondering whether there might be value in disallowing the bootloader from doing anything but getting you quickly into Linux, so the development and support effort can focus on Linux alone, and the bootloader becomes something you spent a day tuning months ago and never touched again (especially not in the factory).

-Andy


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Rob Landley
 

On Sunday 27 December 2009 04:09:23 Andy Green wrote:
I agree it's nice to have a build environment compatible with your
deployment environment, and distros certainly have their advantages, but
you may not want to actually _deploy_ 48 megabytes of /var/lib/apt from
Ubuntu in an embedded device.
I did say in the thread you want ARM11+ basis and you need 100-200MBytes
rootfs space to get the advantages of the distro basis. If you have
something weaker (even ARM9 since stock Fedora is ARMv5+ instruction set
by default) then you have to do things the old way and recook everything
yourself one way or another.
I started programming on a commodore 64. By modern standards, that system is
so far down into "embedded" territory it's barely a computer. And yet people
did development on it.

http://landley.net/history/catfire/wheniwasaboy.mp3

That said, you can follow Moore's Law in two directions: either it makes stuff
twice as powerful every 18 months or it makes the same amount of power half
the price.

What really interests me is disposable computing. Once upon a time swiss
watches were these amazingly valuable things (which Rolex and friends try to
cling to even today by gold-plating the suckers), but these days you can get a
cheap little LCD clock as a happy meal toy. The cheapest crappiest machines
capable of running Linux are 32-bit boxes with 4 megs of ram, which were
high-end server systems circa 1987 that cost maybe about $5k (adjusted for inflation
anyway). These days, the cheapest low-end Linux boxes (of the "repurposed
router" variety) are what, about $35 new? Moore's Law says that 21 years is
14 doublings, which would be 1) $2500, 2) $1250, 3) $635, 4) $312, 5) $156, 6)
$87, 7) $39, 8) $19, 9) $9.76, 10) $4.88, 11) $2.44, 12) $1.22, 13) $0.61, 14)
$0.31.
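The halving arithmetic above checks out in integer shell math:

```shell
# 21 years at one halving per 18 months:
halvings=$(( 21 * 12 / 18 ))
# $5000 expressed in cents, halved that many times
# (integer math gives 30; the exact value is ~30.5, i.e. about $0.31):
cents=$(( 500000 / (1 << halvings) ))
echo "$halvings halvings -> $cents cents"
```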

So in 2009 that $5000 worth of computing power should actually cost about 30
cents, and should _be_ disposable. In reality, the CPU in that router is
clocked 20 times faster than a Compaq deskpro 386, you get 4 to 8 times the
memory, they added networking hardware, and so on. And there are fixed costs
for a case and power supply that don't care about Moore's Law, plus up-front
costs to any design that need to be amortized over a large production run to
become cheap, and so on.

And the real outstanding research problems include ergonomic UI issues for
tiny portable devices, batteries wearing out after too many cycles, and the
fact that making "disposable" devices out of materials like cadmium is dubious
from environmental standpoint. Oh, and of course there was the decade or two
companies like Intel lost going up a blind alley by bolting giant heat sinks
and fans onto abominations like the Pentium 4 and Itanic. They didn't care
about power consumption at all until fairly recently, and are still backing
out of that cul-de-sac even today...

Still, I foresee a day when cereal boxes have a display on the front that
changes every 30 seconds to attract passersby, driven by the same amount of
circuitry and battery that makes the "free toy inside" blink an LED today. (I
don't know what else that sort of thing will be used for, any more than people
predicted checking twitter from the iPhone.)

Thus I'm reluctant to abandon the low-end and say "oh we have more power now,
only machines with X and Y are interesting". The mainframe, minicomputer, and
micro (PC) guys each said that, and today the old PC form factor's getting
kicked into the server space by the iPhone and such. I want to follow Moore's
Law down into disruptive technology territory and find _out_ what it does.

Even now there are plenty of suitable platforms that will work with it,
and over time they will only increase.
You must be this tall to ride the computer.

Nothing seems to totally die out
(8051-based micros are still in the market)
Mainframes are still on the market too.

but each time something new
comes in at the top it grabs some of the market and the older ones shrink.

It boils down to the point that if you just treat the ARM11+ platforms
like the previous generation and stick fat bootloaders and buildroot
blobs on them, you are going to miss out on an epochal simplification
where embedded Linux largely becomes like desktop Linux in workflow,
quality and reliability of update mechanisms, and effort needed to bring
up a box / device.
New computing niches will develop new usage patterns. The iPhone is currently
doing this, and is unlikely to be the last cycle.

They'll also grow more powerful and expand into old niches the way "blade
servers" are constructed from laptop components and used for batch processing
today, but I personally find that less interesting.

-Andy
Rob
--
Latency is more important than throughput. It's that simple. - Linus Torvalds


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Rob Landley
 

On Sunday 27 December 2009 03:54:51 Andy Green wrote:
On 12/27/09 07:17, Somebody in the thread at some point said:

Hi Rob -
Before replying, I note that Mark Miller and I gave a presentation entitled
"Developing for non-x86 targets using QEMU" at Ohio LinuxFest in October.

http://impactlinux.com/fwl/downloads/presentation.pdf
http://impactlinux.com/fwl/presentation.html

There's even horribly mangled audio of our rushed 1 hour presentation. (The
slides are for a day-long course and we had 55 minutes to give a talk, so we
skimmed like mad. Unfortunately, the netbook they used for audio recording
had the mother of all latency spikes whenever the cache flush did an actual
flash erase and write, so there are regular audio dropouts the whole way
through.) Still, it's somewhere under:

http://www.archive.org/details/OhioLinuxfest2009

I've also spent the last few years developing a project that produces native
built environments for various QEMU targets and documents how to bootstrap
various distros under them:

http://impactlinux.com/fwl

So I do have some firsthand experience here.

Fedora provides a whole solution there, with the restriction it's
designed for native build, not cross.
QEMU: it's not just for breakfast anymore.
That's right Qemu often requires lunch, teatime and supper too to build
anything :-)
Which is why you hook it up to distcc so it can call out to the cross
compiler, which speeds up the build and lets you take advantage of SMP.
(Pages 217-226 of the above PDF.)
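The distcc-under-emulation arrangement can be sketched roughly like this; the armv5l- tool names, masquerade directory, and job counts are illustrative assumptions, while 10.0.2.2 is QEMU's user-mode-networking alias for the host:

```shell
# Host side: publish the cross compiler through distccd.
# A masquerade directory maps the plain "gcc"/"g++" names the
# guest will send to the cross tools (armv5l-* is a placeholder).
mkdir -p /tmp/masq
ln -sf "$(command -v armv5l-gcc)" /tmp/masq/gcc
ln -sf "$(command -v armv5l-g++)" /tmp/masq/g++
PATH=/tmp/masq:$PATH distccd --daemon \
    --listen 127.0.0.1 --allow 127.0.0.1 --jobs 4

# Guest side (inside the emulated target): route compiles out to
# the host, which QEMU's user networking exposes as 10.0.2.2.
export DISTCC_HOSTS=10.0.2.2
make -j8 CC="distcc gcc"
```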

That's also why my FWL project uses a statically linked version of busybox,
because the static linking avoids the extra page retranslations on each exec,
which sped up the ./configure stage by 20%. (Pages 235-236 of the PDF.)

There's more things you can do to speed it up if you want to go down that
rabbit hole (which the presentation does), and there's more work being done in
qemu. (TCG was originally a performance hit but has improved since, although
it varies widely by platform and is its own rabbit hole. Also switching to
gigabit NIC emulation with jumbo frames helped distcc a lot.)
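The NIC tweak can be sketched as follows (the machine type and kernel arguments here are illustrative, not FWL's exact invocation):

```shell
# Emulate a gigabit NIC instead of the default, then raise the guest MTU so
# distcc ships preprocessed source in fewer, larger packets.
qemu-system-arm -M versatilepb -kernel zImage -hda rootfs.ext2 \
  -append "root=/dev/sda console=ttyAMA0" \
  -net nic,model=e1000 -net user

# inside the guest:
ifconfig eth0 mtu 9000
```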

But in general, Moore's Law says that qemu on current PC hardware is about the
speed of current PC hardware seven years ago. (And obviously nobody ever
built anything before 2003. :)

Newer ARM platforms like Cortex-A8+ and the Marvell Sheevaplug will
outstrip emulated performance on a normal PC. There are 2GHz multi-core
ARMs coming as well, apparently. So, due to that, I took the view I should
ignore Qemu and get an early start on true native build, which will be
the future of "native build" as opposed to cross.
Pages 24-34 of the above PDF go over this. The first two pages are on the
advantages of native compiling on real hardware, the next eight pages are on
the disadvantages. It can certainly be made to work, especially in a large
corporation willing to spend a lot of money on hardware as a _prerequisite_ to
choosing a deployment platform.

For hobbyists, small businesses, and open source developers in general, there
are significant advantages to emulation. (Page 208 comes to mind.) And if you
_are_ going to throw money at hardware, x86-64 continues to have better
price/performance ratio, which was always its thing.

The point of the distro is you just let them build the bulk of it, just
installing binary packages. You're only rebuilding the bits you are
changing for your application.
Pages 68-71. If your definition of embedded development is using off the shelf
hardware and installing prebuilt binary packages into it, life becomes a lot
easier, sure.

For a lot of cases that's a few small
app packages that are mainly linking against stuff from the distro and
they're not too bad to do natively.
Pages 78-84

(In addition my workflow is to edit
on a host PC
Pages 178-180

and use scripts to teleport a source tree tarball to the
device where it's built as a package every time and installed together
with its -devel,
Pages 181-202

so everything is always under package control).
Package control or source control? (Different page ranges...)

-Andy
Rob
--
Latency is more important than throughput. It's that simple. - Linus Torvalds


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Andy Green <andy@...>
 

On 12/27/09 07:27, Somebody in the thread at some point said:

Hi Rob -

Again this is buildroot thinking. The distro provides both the native
and cross toolchains for you. You're going to want to use the same
distro as you normally use on your box so the cross toolchain installs
as a package there.
Because boards that use things like uClibc and busybox just aren't interesting
to you?
I used them both before, but I can say with confidence that if the platform will take glibc and bash, most people will expect more complete, and in that sense more reliable, performance from those.

It breaks down at stock distro init because it's painfully slow. But otherwise there are real advantages in having the full-strength versions of everything.

Please don't confuse "development environment" with "build environment". A
development environment has xterms and IDEs and visual diff tools and a web
browser and PDF viewer and so on. A build environment just compiles stuff to
produce executables. (Even on x86, your fire breathing SMP build server in the
back room isn't necessarily something you're going to VNC into and boot a
desktop on.)
Since I didn't use either term, I don't know why you think I'm confusing them.
As I said in the other reply, my workflow is to edit a package's source tree on a host (so you can use any editor on your host, not just kate over fish://) and then, via a host-side script using scp and ssh, get the current tree package-built and installed on the device in one step.

So I hope it's clear there is solid separation between what you're calling the "development environment" and the "build environment", to the point that they have nothing to do with each other except an ssh-based script to get stuff built.
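A minimal sketch of what such a host-side script could look like (the device address, package name, and rpmbuild layout here are assumptions for illustration, not Andy's actual script):

```shell
#!/bin/sh
# Hypothetical "teleport and build" helper: it emits the command sequence so
# the steps can be inspected, or piped to sh to actually run them.
teleport_cmds() {
    pkg=$1; ver=$2; device=$3
    tarball="$pkg-$ver.tar.gz"
    # pack the edited source tree, copy it over, build it as a package there
    echo "tar czf /tmp/$tarball $pkg-$ver"
    echo "scp /tmp/$tarball $device:/tmp/"
    echo "ssh $device rpmbuild -ta /tmp/$tarball"
    # install the package (and its -devel) so everything stays under package control
    echo "ssh $device rpm -Uvh /root/rpmbuild/RPMS/*/$pkg-*.rpm"
}

teleport_cmds myapp 1.0 root@gta02
```

Piping the output to sh (`teleport_cmds myapp 1.0 root@gta02 | sh`) would execute the whole edit-to-installed cycle in one step, assuming ssh keys are in place.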

I agree it's nice to have a build environment compatible with your deployment
environment, and distros certainly have their advantages, but you may not want
to actually _deploy_ 48 megabytes of /var/lib/apt from Ubuntu in an embedded
device.
I did say in the thread that you want an ARM11+ basis and 100-200MBytes of rootfs space to get the advantages of the distro basis. If you have something weaker (even ARM9, since stock Fedora targets the ARMv5+ instruction set by default) then you have to do things the old way and recook everything yourself one way or another.

Even now there are plenty of suitable platforms that will work with it, and over time they will only increase. Nothing seems to totally die out (8051-based micros are still in the market) but each time something new comes in at the top it grabs some of the market and the older ones shrink.

It boils down to this: if you just treat the ARM11+ platforms like the previous generation and stick fat bootloaders and buildroot blobs on them, you are going to miss out on an epochal simplification, where embedded Linux largely becomes like desktop Linux in workflow, in quality and reliability of update mechanisms, and in the effort needed to bring up a box / device.

-Andy


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Andy Green <andy@...>
 

On 12/27/09 07:17, Somebody in the thread at some point said:

Hi Rob -

Fedora provides a whole solution there, with the restriction it's
designed for native build, not cross.
QEMU: it's not just for breakfast anymore.
That's right Qemu often requires lunch, teatime and supper too to build anything :-)

Newer ARM platforms like Cortex-A8+ and the Marvell Sheevaplug will outstrip emulated performance on a normal PC. There are 2GHz multi-core ARMs coming as well, apparently. So, due to that, I took the view I should ignore Qemu and get an early start on true native build, which will be the future of "native build" as opposed to cross.

The point of the distro is you just let them build the bulk of it, just installing binary packages. You're only rebuilding the bits you are changing for your application. For a lot of cases that's a few small app packages that are mainly linking against stuff from the distro and they're not too bad to do natively. (In addition my workflow is to edit on a host PC and use scripts to teleport a source tree tarball to the device where it's built as a package every time and installed together with its -devel, so everything is always under package control).

-Andy


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Rob Landley
 

On Wednesday 23 December 2009 03:29:22 Andy Green wrote:
On 12/23/09 08:56, Somebody in the thread at some point said:

Hi -

yourself because it's the buildroot mindset, that whole task
disappears with a distro basis.
If you don't step into for example toolchain problems or other crazy
things...
Again this is buildroot thinking. The distro provides both the native
and cross toolchains for you. You're going to want to use the same
distro as you normally use on your box so the cross toolchain installs
as a package there.
Because boards that use things like uClibc and busybox just aren't interesting
to you?

Please don't confuse "development environment" with "build environment". A
development environment has xterms and IDEs and visual diff tools and a web
browser and PDF viewer and so on. A build environment just compiles stuff to
produce executables. (Even on x86, your fire breathing SMP build server in the
back room isn't necessarily something you're going to VNC into and boot a
desktop on.)

I agree it's nice to have a build environment compatible with your deployment
environment, and distros certainly have their advantages, but you may not want
to actually _deploy_ 48 megabytes of /var/lib/apt from Ubuntu in an embedded
device.

Rob
--
Latency is more important than throughput. It's that simple. - Linus Torvalds


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Rob Landley
 

On Tuesday 22 December 2009 16:23:37 Andy Green wrote:
On 12/22/09 11:12, Somebody in the thread at some point said:

Hi Robert -

(Personally I used Fedora ARM port and RPM, but any distro and
packagesystem like Debian workable on ARM would be fine).
Until now, we are using the "build it yourself" approach with ptxdist,
basically because of these reasons:

- If something goes wrong, we want to be able to fix it, which means
that we must be able to recompile everything. Having the source is no
value by itself, if you are not able to build it.
Fedora provides a whole solution there, with the restriction it's
designed for native build, not cross.
QEMU: it's not just for breakfast anymore.

Rob
--
Latency is more important than throughput. It's that simple. - Linus Torvalds


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Robert Schwebel
 

On Wed, Dec 23, 2009 at 09:29:22AM +0000, Andy Green wrote:
If you don't step into for example toolchain problems or other crazy
things...
Again this is buildroot thinking. The distro provides both the native
and cross toolchains for you. You're going to want to use the same
distro as you normally use on your box so the cross toolchain installs
as a package there.

If all that's left is the risk of "crazy things" happening well that's
a risk whatever you do or even if you do nothing :-)
What I mean is that in the past we had more than once the case that
we've found showstopper bugs in upstream gcc, binutils etc. (ARM has
much less coverage than x86). It was never a problem in the project,
because we actually have been able to port test cases to the latest gcc,
reproduce, report to upstream, get it fixed, and port the fix back. That at
least implies that you *are* able to rebuild everything (which is possible
in both ways, of course).

The only thing I know of that matches "outside the kernel" requirement
is the machine ID that's passed in on ATAG. I agree it's generally
good to have a single build that's multipurpose.
Machine ID is a good concept, but doesn't work on everything. What's the
machine on a system that consists of a cpu module and a baseboard?
Things move in the oftree direction, and I think bootloaders have to
deal with that, or there has to be a possibility that an early kernel
stage deals with the oftree. But that just moves the hackery-code from
the bootloader into the kernel, which is no gain.

On iMX you have to go read IIM to get device info but actually that's
not hard.
... as long as the hardware guys don't decide in their great wisdom to
change the meaning of some bits from chip to chip ...

But that device ID will itself alone tell you the on-SoC peripherals
since there's an ID per die; it makes no difference if this expanded
SoC oftree data itself lives in the kernel then. The non-SoC stuff
can just be probed.
Off-board devices behind SPI, I2C and chip select busses cannot be
probed.

You are observing a subset of the embedded universe, and you are right
if you limit to it. Bootloaders like barebox or u-boot provide a concept
for *all* use cases.

rsc
--
Pengutronix e.K.
Industrial Linux Solutions | http://www.pengutronix.de/
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Andy Green <andy@...>
 

On 12/23/09 08:56, Somebody in the thread at some point said:

Hi -

yourself because it's the buildroot mindset, that whole task
disappears with a distro basis.
If you don't step into for example toolchain problems or other crazy
things...
Again this is buildroot thinking. The distro provides both the native and cross toolchains for you. You're going to want to use the same distro as you normally use on your box so the cross toolchain installs as a package there.

If all that's left is the risk of "crazy things" happening well that's a risk whatever you do or even if you do nothing :-)

The oftree is currently provided by the bootloader, and much of what it
contains is unprobable peripherals, i.e. the IP cores in the SoC cpus.
For example for i.MX (which we happen to maintain in the mainline),
there is a strong aim for having one kernel that runs on as many devices
as possible. If you want to do this and if you can't probe significant
parts of the hardware, you need an instance outside of the kernel who
tells you what's actually there.
The only thing I know of that matches "outside the kernel" requirement is the machine ID that's passed in on ATAG. I agree it's generally good to have a single build that's multipurpose.

On iMX you have to go read IIM to get device info but actually that's not hard. But that device ID will itself alone tell you the on-SoC peripherals since there's an ID per die; it makes no difference if this expanded SoC oftree data itself lives in the kernel then. The non-SoC stuff can just be probed.

We have customers who care about "splash in 0.5 s" vs. "shell runs after
3 s, then qt starts". People may be used to that kind of noticeable boot
That's fine, they can pay for the extra time to market and the work done on the crap in their bootloader :-) Many customers don't care, provided it acts in line with their general expectation and was delivered faster and cheaper than they expected.

Do you remember the times when we had analog TV? We could zap through 5
But your CRT took a while to warm up / "boot" as well...

-Andy


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Robert Schwebel
 

On Wed, Dec 23, 2009 at 08:38:08AM +0000, Andy Green wrote:
I don't know or care when I get the binary packages from a repo where
they're already built. The whole point of a distro solution is someone
did all the work for you. You're only thinking about mass rebuild
yourself because it's the buildroot mindset, that whole task
disappears with a distro basis.
If you don't step into for example toolchain problems or other crazy
things...

You can emulate "issue 6 monthly rootfs tarball updates" by just
updating the stable package repo at long intervals with well-tested
packagesets. At the same time you can offer other repos with newer
features earlier, get changed packages tested easier, confirm
patchlevel on test systems, etc.
Yes, that's a valuable option.

I take your point but actually there's no reason the *bootloader*
needs that when the bootloader is focussed solely on booting Linux.
*Linux* might want an equipment list from the board, but then
typically you would build all the drivers and they can simply probe
and fail if its not there on the board.
The oftree is currently provided by the bootloader, and much of what it
contains is unprobable peripherals, i.e. the IP cores in the SoC cpus.
For example for i.MX (which we happen to maintain in the mainline),
there is a strong aim for having one kernel that runs on as many devices
as possible. If you want to do this and if you can't probe significant
parts of the hardware, you need an instance outside of the kernel who
tells you what's actually there.

I'm not sure I managed to give the flavour of a bunch of hardware guys
half a world away rotating in and out on Military service. Even
patches internally aren't happening, Mainline isn't an answer.
Well, there are many projects out there which are not so secret that one
cannot expose the kernel drivers. And even if they are, it is possible
to establish a peer-review culture inside of a corporation. But you
won't get the full power of community review. That's the trade off one
has to accept for having secrets :-) But quality is generally a big
issue.

bus driver back into the bootloader. But actually, normal
customers don't care about 200ms on boot either way. They can get
the thing to market quicker and so cheaper and more reliably
without that stuff in the bootloader.
That's a matter of the definition of "normal customers" :-)
What I mean by it is: for geeks like us, it's interesting to see how
fast it will go. The actual customer cannot tell 200ms by eye; he will
accept it if it's not passing his threshold of being "too slow". But
he will like getting it shipping earlier, because the bootloader is
almost invisible in dev effort and in management of production.
We have customers who care about "splash in 0.5 s" vs. "shell runs after
3 s, then qt starts". People may be used to that kind of noticeable boot
time in the phone business, but in the industry (where embedded Linux
boxes are even more "devices" than computers) they often are not.

Do you remember the times when we had analog TV? We could zap through 5
channels in under 3 seconds. *That's* performance :-) My satellite
receiver needs about 10 seconds to boot. Sometimes it feels like
innovation goes backwards.

Cheers,
rsc
--
Pengutronix e.K.
Industrial Linux Solutions | http://www.pengutronix.de/
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555


Re: CELF Project Proposal- Refactoring Qi, lightweight bootloader

Andy Green <andy@...>
 

On 12/23/09 02:28, Somebody in the thread at some point said:

Hi -

No TCP/IP, no TFTP, not even BOOTP (but it's a nice bonus), no command
line interpreter (just a GPIO on board to boot into "unbrick me" mode
:-), and most strikingly _no_ flash driver for flash chip du jour.

To flash it you send a kernel to boot from RAM which is capable of
flashing it.
Sorry, I missed where this kernel appears from, and the bootloader that spawned it, since both could get trashed. That is actually a conundrum on a lot of systems, and some of the solutions (a write-once backup bootloader) in the long run lead to other issues.

True SD Boot genuinely delivers unbrickability, if you are willing to swap out or reformat the SD card.

http://wiki.openmoko.org/wiki/Qi
Looking at the screen shot there, you've got code to parse ext2 filesystems.
What is your definition of "minimal"?
Ew, ext2 doesn't even satisfy powerfail-during-kernel-upgrade safety.
It's misleading (but accurate): ext2 is the "lowest common denominator" read-only parsing, which actually covers ext3 and ext4 if you are careful about the formatting options. So the actual filesystem is typically ext3 or ext4 (ext3 in the GTA02 case); it's not that the bootloader mandates ext2 specifically.
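As a sketch of those "careful formatting options" (the exact feature set an ext2-level parser tolerates is an assumption here; check it against the bootloader's actual reader):

```shell
# ext3 is ext2 plus a journal, so an ext2-level read-only parser copes:
mkfs.ext3 /dev/mmcblk0p2

# ext4 changes the on-disk layout unless features like extents are disabled:
mkfs.ext4 -O ^extents,^huge_file,^flex_bg /dev/mmcblk0p2
```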

I agree it does beg the question of what is "minimal".

The proposal did explain quite well what Qi aims for: not duplicating
lots of kernel drivers badly. If it succeeds in the area of flash
writing, network drivers, network protocols and so on it would be no
bad thing.
Thanks.

One area for potential common ground among bootloaders could be to
share the code for parsing filesystems. It'd be great to see that in
a library shared by GRUB, Qi, U-boot and so on as it's not device
specific at all and not particularly large, but not so trivial that
it's good to have lots of clones.
Yeah it's not a bad idea.

It's possible to boot without parsing filesystems, but that is one
rather nice feature, and with the right filesystems it can make system
updates powerfail-safe.
The bootloader part is tricky, but on this iMX31 device Fedora is used: yum update keeps the last 3 kernels around, and our kernel package follows that. So it's possible to have backup kernels automatically integrated into the bootloader and the packaging system.
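On Fedora that behaviour comes from yum's installonly limit, set in /etc/yum.conf (three is the stock setting, if memory serves):

```
# /etc/yum.conf -- installonly_limit controls how many "installonly"
# packages (i.e. kernels) yum retains before removing the oldest
[main]
installonly_limit=3
```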

Rationale for not providing a boot menu is you don't want to mess with video
init. I don't think I've actually seen an embedded bootloader that messes
with video, they do serial console instead, and you have a screen shot of
serial console messages so apparently the serial driver part is there...
In perspective, serial is usually quite simple. Output-only serial is
even simpler, though :-)
Totally agree!

-Andy
