On July 19, 2015 5:24:25 AM CDT, "André Marques" <andre.lousa.marq...@gmail.com> wrote:
>On 18-07-2015 20:04, Joel Sherrill wrote:
>> On July 18, 2015 1:23:13 PM CDT, "André Marques" <andre.lousa.marq...@gmail.com> wrote:
>>> Hello all,
>>>
>>> as previously pointed out by Gedare, the interrupt handling code in the
>>> RTEMS GPIO API can benefit from the use of an RTEMS interrupt server.
>>> The current design and implementation details of the API can be seen in
>>> my GSOC blog
>>> (https://asuolgsoc2014.wordpress.com/2015/07/14/rtems-gpio-api-status/).
>>>
>>> To summarize the GPIO interrupt handling requirements, it is important
>>> to note that a GPIO bank is composed of several pins, all triggering a
>>> common interrupt vector. Each pin is a separate interrupt source, which
>>> may in turn have one or more handlers of its own. This does not mean,
>>> however, that the pins' interrupt handling is completely independent:
>>> although each pin has its own handler, we cannot simply call all pin
>>> handlers on a GPIO bank every time an interrupt is sensed on the
>>> vector, as the pins themselves do not carry interrupt state data.
>>>
>>> The active/pending interrupts on a GPIO bank are stored in a register,
>>> meaning that every time an interrupt is sensed on an interrupt vector,
>>> the interrupt vector handler (which should be unique) has to read that
>>> register to know which pins have pending interrupts, and then call the
>>> respective handler(s).
>>>
>>> A pin may have more than one handler if more than one device is
>>> connected to the same pin, in which case each device will have a
>>> handler which should begin by probing its corresponding device to find
>>> out whether it is the culprit, and if so handle it.
>>>
>>> The pins' handlers are stored on an internal (to the API) chain and
>>> called sequentially, since in this case each connected device has to be
>>> probed to find the origin of the interrupt (shared interrupt handler
>>> behavior). These handlers are managed by the API.
>>>
>>> Regarding the handling at the vector level, each bank currently has a
>>> unique RTEMS interrupt handler which probes the interrupt line on that
>>> bank and calls (wakes) the corresponding pin handler task for any pin
>>> with a pending interrupt.
>>>
>>> This is the part that can be replaced with an RTEMS interrupt server,
>>> as each bank can have the "same" unique handler installed on the server
>>> through a call to rtems_interrupt_server_handler_install, which puts
>>> the vector/bank handler on a chain to be processed by the server task
>>> (which is woken every time an interrupt is sensed). The advantage
>>> relative to my current implementation is that this allows a single task
>>> to call every handler, instead of waking one task per pin which then
>>> calls the corresponding pin handler(s), which is unnecessary overhead
>>> on a single core system. (An SMP system could benefit from multiple
>>> tasks/servers to allow interrupt handling parallelism, although the
>>> current interrupt server implementation only allows a single server in
>>> the whole system. In this situation it would also be useful if the
>>> vector was re-enabled as soon as possible, so that any interrupts
>>> generated during the handling of a pin's interrupt can be quickly
>>> handled in parallel, instead of waiting for the previous interrupts to
>>> be processed. Remember that in a GPIO vector each pin interrupt is
>>> independent, unless it is an interrupt on the same pin.)
>>>
>>> The handler called by the RTEMS interrupt server task will probe and
>>> clear the GPIO bank interrupt line and call the necessary pin handlers
>>> before allowing the interrupt server to re-enable the vector.
>>>
>>> Apart from this, the API may also allow interrupts on a given
>>> vector/bank to be non-threaded (that is, handled in a regular interrupt
>>> handler, with the advantages/restrictions of an ISR environment).
>>>
>>> This is the current work plan regarding GPIO interrupt handling in the
>>> RTEMS GPIO API, so if anyone has any issue with it do let me know.
>>
>> For single pins that someone wants to map to an ISR, this sounds good.
>> But it is not uncommon to have a set of pins that are a single logical
>> source of interrupts.
>>
>> I have seen a case where multiple pins indicated the state of a 1024
>> position encoder. And other uses of 2-4 pins for device state.
>>
>> How will these cases be dealt with?
>
>*If I understood correctly* the point is that a set of pins may perform
>the same action if any of them generates an interrupt, so in practice
>they will share the same handler. The question would then be how to
>synchronize the interrupt handling, such that only one handler instance
>is executing at a time. With the use of the interrupt server, since only
>one task will call the handlers in sequence, if such a set of pins is
>contained in a single vector/bank then the API already handles it,
>because even if two pins fire an interrupt the handlers will be called
>one at a time, and the vector is disabled during this period.
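For reference, the per-bank dispatch described above could be sketched roughly as below. This is a simplified standalone model, not the actual RTEMS GPIO API: all names (gpio_bank, gpio_pin_handler, gpio_bank_dispatch) are illustrative, and in the real code the dispatch routine would be the handler installed through rtems_interrupt_server_handler_install.

```c
/* Simplified model of the per-bank dispatch: read the pending register
 * once, clear it, and walk the (shared) handler chain of every pin with
 * a pending interrupt. Illustrative only, not the RTEMS GPIO API. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define GPIO_PINS_PER_BANK 32

/* One handler on a pin's shared-handler chain. */
typedef struct gpio_pin_handler {
  bool (*probe_and_handle)(void *arg);  /* true if its device was the culprit */
  void *arg;
  struct gpio_pin_handler *next;
} gpio_pin_handler;

typedef struct {
  volatile uint32_t pending;            /* stands in for the HW status register */
  gpio_pin_handler *handlers[GPIO_PINS_PER_BANK];
} gpio_bank;

/* Vector-level handler, as run from the interrupt server task. */
static void gpio_bank_dispatch(gpio_bank *bank)
{
  uint32_t pending = bank->pending;
  bank->pending = 0;                    /* "clear the interrupt line" */

  for (uint32_t pin = 0; pin < GPIO_PINS_PER_BANK; ++pin) {
    if (!(pending & (UINT32_C(1) << pin)))
      continue;
    /* Shared-handler behavior: probe each connected device in turn. */
    for (gpio_pin_handler *h = bank->handlers[pin]; h != NULL; h = h->next) {
      if (h->probe_and_handle(h->arg))
        break;                          /* culprit found and handled */
    }
  }
  /* On return the interrupt server would re-enable the vector. */
}
```

Since the server task calls gpio_bank_dispatch for one bank at a time, handlers within a bank are naturally serialized, which is the synchronization property discussed above.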
Ok. So just install one handler for multiple pins in a bank, and check
all pins when invoked?

>In fact, since the API requires a separate table for interrupt
>configuration, apart from the broader pin configuration table, it is
>possible to define a single interrupt configuration and use that same
>table to configure all the pins in this situation.

I don't know if one handler per bank would help directly. In the past, I
used something like this and added a layer that checked for changes and
fired change handlers for specific bit sets which changed. For example,
a 16-bit bank could be logically two 4-bit inputs, a single 2-bit input,
and 6 single-bit inputs. One interrupt handler could handle the HW
interrupt and then invoke the appropriate change handlers. This just
required registering a mask and a handler. The single ISR iterated over
a table of that information.

>A problem may happen, however, if this set of pins spans more than one
>vector/bank, as we might have two instances of the same handler in a
>race condition. The solution may be to have a mutex per "logical
>interrupt", as we might call this use case.

I wouldn't worry about this. I am sure some perverse HW designer has
done it, but it is easy enough to add application logic in the unlikely
event one finds this. Short version: I think this is rare enough not to
worry about.

>The API can currently handle pin groupings/sets as single entities, and
>it uses a mutex to synchronize the group operations, since a group can
>span multiple interrupt vectors. But considering that a group could also
>have multiple logical sources of interrupts, it may not make much sense
>to use a grouping to deal with this situation, especially because the
>intention behind pin groupings is that someone will interact directly
>with the group (reading/writing).
>
>A similar process can however be used.
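The mask-and-handler layering I describe above could be sketched like this. Again a standalone model with illustrative names (gpio_change_entry, gpio_bank_isr, and so on), not any existing RTEMS API: the single ISR compares the bank's current pin state with the last observed state and fires a change handler for every registered bit set that changed.

```c
/* Sketch of the change-handler layer: one HW interrupt handler, a table
 * of (mask, handler) registrations, change detection via XOR against
 * the previously observed bank state. Illustrative names only. */
#include <stdint.h>
#include <stddef.h>

typedef void (*gpio_change_handler)(uint16_t new_bits, void *arg);

typedef struct {
  uint16_t mask;               /* bits of the bank forming this logical input */
  gpio_change_handler handler;
  void *arg;
} gpio_change_entry;

#define GPIO_MAX_CHANGE_ENTRIES 16

static gpio_change_entry change_table[GPIO_MAX_CHANGE_ENTRIES];
static size_t change_count;
static uint16_t last_state;

static void gpio_register_change_handler(uint16_t mask,
                                         gpio_change_handler handler,
                                         void *arg)
{
  if (change_count < GPIO_MAX_CHANGE_ENTRIES)
    change_table[change_count++] = (gpio_change_entry){ mask, handler, arg };
}

/* The single ISR: find changed bits and invoke the change handler of
 * every logical input that was affected. */
static void gpio_bank_isr(uint16_t current_state)
{
  uint16_t changed = current_state ^ last_state;
  last_state = current_state;

  for (size_t i = 0; i < change_count; ++i) {
    if (changed & change_table[i].mask)
      change_table[i].handler(current_state & change_table[i].mask,
                              change_table[i].arg);
  }
}
```

So a 16-bit bank split into a 4-bit input, a 2-bit input, and single-bit inputs would just be several gpio_register_change_handler calls with the appropriate masks.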
>
>The idea would be to have an opaque type such as
>rtems_gpio_logical_interrupt, which would be defined as:
>
>struct rtems_gpio_logical_interrupt
>{
>  rtems_chain_node node;
>  rtems_id mutex;
>};
>
>Then the BSP/application could create a logical interrupt with
>something like:
>
>rtems_gpio_logical_interrupt *rtems_gpio_create_logical_interrupt(void);
>
>which would return a pointer to a structure instance with a mutex.
>
>Then each interrupt configuration table could have a field for this
>pointer, which, when not NULL, would require the mutex to be acquired
>before calling the handler, hence serializing access to the handler for
>any pins sharing the same handler.
>
>>> After this is done another round of code review may start.
>>>
>>> Thank you for your time,
>>> André Marques.
>>> _______________________________________________
>>> devel mailing list
>>> devel@rtems.org
>>> http://lists.rtems.org/mailman/listinfo/devel
>>
>> --joel

--joel
_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel