We currently have a model where each eTLD has a separate event queue. This is an improvement over Gecko (one event queue to rule them all), but I suspect we can do better. Specifically, I'm interested in moving to isolated event queues per document, then doing round-robin event processing on each one in an eTLD group. That would make the script event loop look something like this:

enum ScriptMsg {
  AttachDocument(Option<ParentInfo>, DocumentInfo),
  DetachDocument(DocumentInfo),
  InputEvent(EventData),
  ...
}

struct ScriptTask {
  // all documents currently residing in memory for this eTLD
  documents: HashMap<PipelineId, JS<Document>>,
  // queue for events that cannot be processed by a single document
  eTLD_control_port: Receiver<ScriptMsg>,
  // documents that still require a chance to run events
  not_yet_processed: Vec<PipelineId>,
}

impl ScriptTask {
  fn run(&mut self) {
    loop {
      self.run_one_event();

      // simple round-robin event processing
      if self.not_yet_processed.is_empty() {
        self.not_yet_processed = self.documents.keys().cloned().collect();
      }

      let current = match self.not_yet_processed.pop() {
        Some(id) => id,
        None => continue,
      };
      // the document may have been detached since its id was queued
      let document = match self.documents.get_mut(&current) {
        Some(document) => document,
        None => continue,
      };
      if !document.should_run_events() {
        continue;
      }

      document.run_one_event();
    }
  }

  fn run_one_event(&mut self) {
    // process a single event if it is available
    if let Ok(msg) = self.eTLD_control_port.try_recv() {
      match msg {
        ScriptMsg::InputEvent(data) => {
          // let document be the appropriate target for the event
          document.queue_user_input(data);
        }
        ...
      }
    }
  }
}

struct Document {
  network_task_source: Receiver<NetworkMsg>,
  timer_task_source: Receiver<TimerMsg>,
  user_input_task_source: Receiver<UserInputMsg>,
  history_traversal_task_source: Receiver<HistoryTraversalMsg>,
  dom_manipulation_task_source: Receiver<DOMManipulationMsg>,
}

impl Document {
  fn should_run_events(&mut self) -> bool {
    // can choose to throttle events (eg. every 1/N), or not run them at all
    // (see the sketch below)
  }

  fn run_one_event(&mut self) {
    // This could also be a more complicated decision to prioritize user input, for example.
    select! {
      msg = self.network_task_source.try_recv() => self.process_network_task(msg),
      msg = self.timer_task_source.try_recv() => self.process_timer_task(msg),
      // ...
    }
  }
}
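
To make the throttling and prioritization comments above a bit more concrete, here is a rough, self-contained sketch of one way Document could do both. It uses plain mpsc channels, made-up message types, and hypothetical throttle_counter/throttle_period fields and process_* methods; none of this is meant as the actual API, just an illustration.

use std::sync::mpsc::Receiver;

// Placeholder message types standing in for the real task source payloads.
struct UserInputMsg;
struct TimerMsg;
struct NetworkMsg;

struct Document {
  user_input_task_source: Receiver<UserInputMsg>,
  timer_task_source: Receiver<TimerMsg>,
  network_task_source: Receiver<NetworkMsg>,
  // used for simple 1-in-N throttling of background documents
  throttle_counter: u32,
  throttle_period: u32, // assumed >= 1; 1 means "never throttle"
}

impl Document {
  // Run events only on every throttle_period-th opportunity; a fully
  // suspended document could simply return false unconditionally.
  fn should_run_events(&mut self) -> bool {
    self.throttle_counter = self.throttle_counter.wrapping_add(1);
    self.throttle_counter % self.throttle_period == 0
  }

  // Check task sources in priority order and process at most one event:
  // user input first, then timers, then network.
  fn run_one_event(&mut self) {
    if let Ok(msg) = self.user_input_task_source.try_recv() {
      return self.process_user_input_task(msg);
    }
    if let Ok(msg) = self.timer_task_source.try_recv() {
      return self.process_timer_task(msg);
    }
    if let Ok(msg) = self.network_task_source.try_recv() {
      return self.process_network_task(msg);
    }
  }

  fn process_user_input_task(&mut self, _msg: UserInputMsg) { /* ... */ }
  fn process_timer_task(&mut self, _msg: TimerMsg) { /* ... */ }
  fn process_network_task(&mut self, _msg: NetworkMsg) { /* ... */ }
}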


This appears to satisfy the requirements of the spec (cf. https://html.spec.whatwg.org/multipage/webappapis.html#processing-model-9 and https://html.spec.whatwg.org/multipage/webappapis.html#task-source). My main concern is the loss of deterministic ordering: with a single event queue, any event that is queued before another is guaranteed to run before it as well. In this new model, if two events are added to different queues in the same document, they could run in either order. This could be fixed by always running the oldest event available in any of a single document's queues, but are there dangers of out-of-order events between same-origin documents? Thoughts?
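
One way to get deterministic ordering back within a document would be to stamp every task with a sequence number from a shared counter as it is queued, and have run_one_event always run the task source whose oldest pending task has the smallest number. Below is a rough sketch of that idea; the SequencedSource wrapper, the message types, and the process_* functions are made up for illustration, and the one-slot peeked buffer exists only because mpsc receivers have no peek operation.

use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::mpsc::Receiver;

// Monotonic counter; the sending half of each task source would attach
// next_seq() to every task before putting it on the channel. The counter
// could be per eTLD group rather than process-global.
static NEXT_SEQ: AtomicU64 = AtomicU64::new(0);

fn next_seq() -> u64 {
  NEXT_SEQ.fetch_add(1, Ordering::SeqCst)
}

// Wraps a task source so the oldest pending task can be inspected without
// being consumed.
struct SequencedSource<T> {
  port: Receiver<(u64, T)>,
  peeked: Option<(u64, T)>,
}

impl<T> SequencedSource<T> {
  // Sequence number of the oldest pending task, if any.
  fn head_seq(&mut self) -> Option<u64> {
    if self.peeked.is_none() {
      self.peeked = self.port.try_recv().ok();
    }
    self.peeked.as_ref().map(|&(seq, _)| seq)
  }

  fn take(&mut self) -> Option<T> {
    self.peeked.take().map(|(_, msg)| msg)
  }
}

struct NetworkMsg;
struct TimerMsg;
fn process_network_task(_msg: NetworkMsg) {}
fn process_timer_task(_msg: TimerMsg) {}

// The core of Document::run_one_event under this scheme: compare the head
// sequence numbers of the task sources and run whichever task is oldest.
// Shown with two sources; extending to all five is mechanical.
fn run_oldest(network: &mut SequencedSource<NetworkMsg>,
              timers: &mut SequencedSource<TimerMsg>) {
  match (network.head_seq(), timers.head_seq()) {
    (Some(n), Some(t)) if n <= t => process_network_task(network.take().unwrap()),
    (Some(_), None) => process_network_task(network.take().unwrap()),
    (_, Some(_)) => process_timer_task(timers.take().unwrap()),
    (None, None) => {}
  }
}

The same sequence numbers could also give a well-defined order between same-origin documents in one script task, since the round-robin loop could compare head_seq across documents instead of rotating blindly; whether that is worth the extra coupling is exactly the question above.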

Cheers,
Josh