Re: [PATCH 1/2] RTEMS GPIO API definition and implementation.
On Tue, Jun 23, 2015 at 6:44 PM, André Marques wrote:
> On 23-06-2015 17:16, Gedare Bloom wrote:
>>>>> +  uint32_t bank;
>>>>> +  int handled_count;
>>>>> +  int rv;
>>>>> +
>>>>> +  gpio = (gpio_pin*) arg;
>>>>> +
>>>>> +  bank = gpio->bank_number;
>>>>
>>>> Validate args for errors.
>>>
>>> These tasks are only created by the rtems_gpio_enable_interrupt()
>>> function, and all parameters are validated before creating/starting
>>> the task. Should they be validated again here?
>>
>> No, as long as they are validated in the entry-ways to the module it
>> should be fine.
>>
>>>>> +
>>>>> +  AQUIRE_LOCK(bank_lock[bank]);
>>>>
>>>> What is the lock protecting?
>>>
>>> Since rtems_gpio_disable_interrupt() kills this task when the
>>> interrupt is disabled, having the lock here (which
>>> rtems_gpio_disable_interrupt() must also acquire before doing
>>> anything) ensures that any interrupt disable call occurring during
>>> interrupt processing will have to wait while the on-going interrupt
>>> is processed. Otherwise the handler could leave the system in an
>>> inconsistent state. I shall make this clear with a comment.
>>
>> Do we really want the task to be killed? Anyway, the naming of
>> "gpio_disable_interrupt" and "gpio_enable_interrupt" is a bit
>> confusing to me, since it really is about creating/destroying the
>> interrupt handling thread? I don't quite understand the design there.
>
> When an interrupt is enabled on a pin, a task is created to call its
> handler/handlers (this task is put to sleep and woken by the bank's
> ISR each time an interrupt on that pin is sensed). If that interrupt
> is disabled on that pin, the task is killed because the pin will not
> be providing more interrupts. An instance of this task is created for
> every pin with an interrupt enabled.

Does this mean there is potentially one task for every pin? Why can't
the task be shared? At least among the banks; then you only need
pins/32 max tasks.

>>> As for the gpio[0], the gpio pointer points to a whole gpio bank, so
>>> gpio[0] is the first pin of this bank. Since all pins in this bank
>>> have the same bank number, I can use any pin from it to obtain the
>>> bank number that this ISR is listening to. Maybe gpio should be
>>> renamed to gpio_bank.
>>
>> Ah. Something bugs me about redundantly storing the same bank number
>> 32 times... Is there a better data structure for you to use to keep
>> track of all the pins and their metadata / configurations?
>
> There may be a few alternatives:
>
> 1. With the current bank*bank_pins matrix structure, since I always
> need to have the bank and pin number to access a GPIO pin, I can omit
> these values from the pin struct. This, however, requires about two
> divisions to calculate the bank/pin number combination, but saves 64
> bits per pin in memory.

Generally not worth trading division for memory. Supposing 64 pins,
we're talking about 512 bytes of overhead in total. But it would be
good to get a grasp on the memory costs of your structures for
tracking pins.

> 2. Use the uni-dimensional array with direct access by pin_number (see
> the end of this e-mail), and keep the bank/pin data in each pin
> struct; then I do not have to calculate the bank/pin from pin_number.
>
> 3. The application could use the bank/pin number directly to refer to
> a pin, instead of the global pin number. However, if the BSP wants to
> define a series of constants for each pin, it would become more
> difficult, since it would have to store two numbers (#define GPIO_40
> 40 would not be possible), so each platform can either not define any
> pin reference (each application hardcodes the pin reference in its
> configuration), or it would have to resort to a table of structs,
> probably facing this same problem.

It may be better for the application interface to be opaque, using
some kind of descriptor: either a pin number or an opaque type.

> Maybe option 3 is better with the hardcoded reference to the bank/pin
> combination; otherwise I feel like the only option becomes a choice
> between 1 or 2, or moving the problem around. The usefulness of having
> platform-defined references to the pins may also be arguable.

Somehow an application has to know what pins to use. BSPs can provide
some specific information, such as known interfaces (LEDs are a simple
example). Ultimately though, it may be simplest for BSP writers to
provide a document that maps between global pin numbers and pins on
the board, so that users know where to attach wires. Sorting this
problem out is worthwhile, and might be good to discuss in a separate
email where others might catch it.

>>>>> +
>>>>> +  /* If the function was successfully assigned to the pin,
>>>>> +   * record that information in the gpio_pin_state structure. */
>>>>> +  gpio_pin_state[bank][pin].pin_function = function;
>>>>> +  gpio_pin_state[bank][pin].logic_invert = logic_invert;
>>>>> +
>>>>> +  if ( func
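To make the shared-task suggestion above concrete, here is a minimal
sketch of a per-bank handler task. Only the classic RTEMS event calls
are real API; the handler table, the status-read helper, and the
constants are illustrative, not from the patch:

    #include <rtems.h>
    #include <stdint.h>

    #define GPIO_BANK_COUNT 2   /* illustrative */
    #define PINS_PER_BANK   32

    /* Hypothetical per-pin handler table, filled in by
     * rtems_gpio_enable_interrupt(). */
    typedef void (*gpio_pin_handler)(uint32_t bank, uint32_t pin);
    static gpio_pin_handler pin_handlers[GPIO_BANK_COUNT][PINS_PER_BANK];

    /* Hypothetical helper: returns and clears the bank's pending
     * interrupt bits. */
    extern uint32_t gpio_bank_read_and_clear_events(uint32_t bank);

    /* One task per bank instead of one per pin. The bank ISR sends
     * RTEMS_EVENT_0 to wake it, and it dispatches every pending pin. */
    static rtems_task gpio_bank_task(rtems_task_argument arg)
    {
      uint32_t bank = (uint32_t) arg;

      while (1) {
        rtems_event_set out;

        /* Sleep until the bank ISR signals activity. */
        rtems_event_receive(RTEMS_EVENT_0, RTEMS_WAIT | RTEMS_EVENT_ALL,
                            RTEMS_NO_TIMEOUT, &out);

        uint32_t pending = gpio_bank_read_and_clear_events(bank);

        while (pending != 0) {
          uint32_t pin = (uint32_t) __builtin_ctz(pending);

          pending &= pending - 1;  /* clear lowest set bit */

          if (pin_handlers[bank][pin] != NULL) {
            pin_handlers[bank][pin](bank, pin);
          }
        }
      }
    }

With this arrangement, disabling the last interrupt of a bank could
simply stop signalling the task rather than killing it.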
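And a small sketch of the numbering arithmetic behind alternatives 1
and 3, assuming 32 pins per bank (all names here are illustrative):

    #define PINS_PER_BANK 32

    /* Alternative 1: derive bank/pin from the global number on demand
     * (one division plus one modulo per access). */
    #define PIN_BANK(n)   ((n) / PINS_PER_BANK)
    #define PIN_INDEX(n)  ((n) % PINS_PER_BANK)

    /* Alternative 3 can still offer single-token constants by packing
     * the bank/pin pair into one integer, which also doubles as the
     * opaque descriptor mentioned above. */
    #define GPIO_PIN(bank, pin)  (((bank) << 5) | (pin))  /* 5 = log2(32) */
    #define GPIO_40  GPIO_PIN(1, 8)  /* global pin 40 = bank 1, pin 8 */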
Re: [RPI BSP] zero length array in kernel and refactor of video char output
On Wed, Jun 24, 2015 at 2:31 AM, QIAO YANG wrote:
> On Jun 23, 2015, at 07:15 AM, Gedare Bloom wrote:
>> On Tue, Jun 23, 2015 at 7:43 AM, QIAO YANG wrote:
>>> Hi,
>>>
>>> As suggested by gedare, I think using a zero length array to
>>> represent the mailbox buffer and tag data is a good way: much more
>>> readable, and a clearer abstraction of the structure of the mailbox
>>> buffer and tag. I've made an attempt; here is the scratch:
>>> https://github.com/yangqiao/rtems/commit/3ed7e9bde493bdc8e644fcefa285d99255201ada
>>>
>>> The construction of a buffer is then decomposed into the following
>>> procedure:
>>> 1. Calculate the total length of the buffer.
>>> 2. Allocate and initialize the buffer.
>>> 3. Pack the request data into the buffer.
>>> 4. Send the buffer by mailbox, then read the response.
>>> 5. Unpack the response data into variables.
>>>
>>> I've tested it in userspace and it works well, but in kernel space I
>>> cannot allocate the memory by malloc. Is there any alternative way
>>> to let us use zero length arrays in the kernel driver?
>>
>> It depends. BSP code can use malloc, but care should be taken about
>> where you use it. An alternative would be to use a free list.
>
> I've retried allocating the zero-length array statically and I've
> found some problems:
>
> 1. As described in the GCC manual, the size of a struct that contains
> a zero length array at the end is determined by the size of the array
> given in the initializer. But what I've found is that the initializer
> doesn't initialize the zero length array; it has to be set afterward.
> I suspect the problem comes from the cross compiler. Even if I don't
> give an initializer for the array, I can get access to the elements.
> It's dangerous and it doesn't work as described in the manual.
>
> 2. I'm afraid that a struct A that contains a zero length array of
> struct B, where struct B itself has a zero-length array, is not
> acceptable. Since the second struct B2 doesn't know the length of B1,
> the position of B2 would be shifted and would override part of
> structure B1.

Yeah, I saw this #2 just yesterday too. Good job.

> So I propose to:
> 1. Not queue the tags; we handle them one by one: one buffer, one tag.
> 2. Use the zero length array to define the structure of a tag.
> 3. Provide a single function for each tag operation.
> 4. Use a lock to ensure that only one function is in process, to
> avoid mailbox conflicts.

Makes sense.

> https://github.com/yangqiao/rtems/commit/971d3ccdab04171494a3b73684f4f6f243e230b9
> I've only implemented the function to get the display size here. If
> it's OK I'll add some others.

I made a few comments, but I think the idea is good for moving forward.

Gedare
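For reference, a minimal sketch of the single-tag layout proposed
above, using the C99 flexible array member (GCC's zero-length array
behaves similarly); the struct and function names are illustrative,
not from the patch:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* One mailbox property tag with a trailing flexible array member.
     * sizeof(mbox_tag) does not include buf, so the variable part must
     * be allocated (or reserved in a static buffer) explicitly. */
    typedef struct {
      uint32_t tag_id;
      uint32_t buf_size;  /* size of buf in bytes */
      uint32_t req_code;
      uint32_t buf[];     /* request/response data */
    } mbox_tag;

    static mbox_tag *alloc_tag(uint32_t tag_id, size_t data_bytes)
    {
      mbox_tag *tag = malloc(sizeof(*tag) + data_bytes);

      if (tag != NULL) {
        tag->tag_id = tag_id;
        tag->buf_size = (uint32_t) data_bytes;
        tag->req_code = 0;
        memset(tag->buf, 0, data_bytes);
      }

      return tag;
    }

This also shows why problem 2 above is unavoidable: a struct ending in
a flexible array member may not legally be embedded in another struct
or array, because the compiler cannot know where the next element
would start; hence one buffer, one tag.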
configuring libbsd for em NIC driver
Hi

As I posted, I have the pc386 up to the point where I can run the
non-network tests with one intermittent failure.

I am pretty sure Mark wants the em driver found in
freebsd/sys/dev/e1000/ (em driver).

FWIW it looks like qemu simulates a handful of useful NICs. I would
like to also be able to exercise them.

qemu: Supported NIC models:
ne2k_pci,i82551,i82557b,i82559er,rtl8139,e1000,pcnet,virtio

This covers the em, fxp, rtl, and AMD PCNET Lance drivers. That would
cover a lot of the PC territory.

So how does one configure this in the stack? And bring it up enough to
see that the NIC is found?

Thanks.

--
Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherr...@oarcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
Support Available                (256) 722-9985
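For what it's worth, selecting one of those models on the qemu command
line looks like this (the image name is just a placeholder; the -net
options are standard qemu):

    qemu-system-i386 -m 128 -kernel app.exe \
        -net nic,model=e1000 -net user

Swapping model=e1000 for rtl8139 or pcnet would exercise the other
drivers in turn.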
Re: configuring libbsd for em NIC driver
- Joel Sherrill wrote:
> Hi
>
> As I posted, I have the pc386 up to the point where I can
> run the non-network tests with one intermittent failure.
>
> I am pretty sure Mark wants the em driver found in
> freebsd/sys/dev/e1000/ (em driver).
>
> FWIW it looks like qemu simulates a handful of useful
> NICs. I would like to also be able to exercise them.
>
> qemu: Supported NIC models:
> ne2k_pci,i82551,i82557b,i82559er,rtl8139,e1000,pcnet,virtio
>
> This covers the em, fxp, rtl, and AMD PCNET Lance. That
> would cover a lot of the PC territory.
>
> So how does one configure this in the stack? And bring
> it up enough to see that the NIC is found.

You have to add a "pci" child device to the "nexus" device and make
sure that the desired driver is referenced. See also

rtemsbsd/include/bsp/nexus-devices.h

--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.huber at embedded-brains.de
PGP     : Public key available on request.

This message is not a business communication within the meaning of the
EHUG.
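As a rough sketch, the pc386 section of that header would gain
something along these lines (the macro names follow rtems-libbsd
conventions as I recall them; check the header in your tree before
copying):

    /* rtemsbsd/include/bsp/nexus-devices.h (pc386 excerpt, sketch) */

    /* Attach the legacy bus to the nexus, hang the PCI bridge and the
     * PCI bus off it, and reference the em(4) driver so the bus probe
     * can find an e1000 NIC. */
    RTEMS_BSD_DEFINE_NEXUS_DEVICE(legacy, 0, 0, NULL);
    SYSINIT_DRIVER_REFERENCE(pcib, legacy);
    SYSINIT_DRIVER_REFERENCE(pci, pcib);
    SYSINIT_DRIVER_REFERENCE(em, pci);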