Run translation and LLVM in parallel when compiling with multiple CGUs #43506

Merged: 29 commits, Aug 1, 2017

This view shows the changes from 1 commit.

Commits (29)
c4adece
async-llvm(1): Run LLVM already in trans_crate().
michaelwoerister Jul 21, 2017
29d4725
async-llvm(2): Decouple diagnostics emission from LLVM worker coordin…
michaelwoerister Jul 21, 2017
bac57cf
async-llvm(3): Make write::CodegenContext Clone and Send.
michaelwoerister Jul 24, 2017
df6be33
async-llvm(4): Move work coordination to separate thread in order to …
michaelwoerister Jul 24, 2017
b18a61a
async-llvm(5): Do continuous error handling on main thread.
michaelwoerister Jul 24, 2017
8f6894e
async-llvm(6): Make the LLVM work coordinator get its work package th…
michaelwoerister Jul 24, 2017
4282dd8
async-llvm(7): Clean up error handling a bit.
michaelwoerister Jul 24, 2017
645841e
async-llvm(8): Clean up resource management and drop LLVM modules ASAP.
michaelwoerister Jul 25, 2017
ccb970b
async-llvm(9): Move OngoingCrateTranslation into back::write.
michaelwoerister Jul 26, 2017
28589ec
async-llvm(10): Factor compile output files cleanup into separate fun…
michaelwoerister Jul 26, 2017
f3ce505
async-llvm(11): Delay joining ongoing translation until right before …
michaelwoerister Jul 26, 2017
397b2a8
async-llvm(12): Hide no_integrated_as logic in write::run_passes.
michaelwoerister Jul 26, 2017
b924ec1
async-llvm(13): Submit LLVM work packages from base::trans_crate().
michaelwoerister Jul 26, 2017
a1be658
async-llvm(14): Move LTO/codegen-unit conflict check to beginning of …
michaelwoerister Jul 26, 2017
943a5bd
async-llvm(15): Don't require number of codegen units upfront.
michaelwoerister Jul 26, 2017
0ad9eaa
async-llvm(16): Inject allocator shim into LLVM module immediately if…
michaelwoerister Jul 26, 2017
e7d0fa3
async-llvm(17): Create MSVC __imp_ symbols immediately for each module.
michaelwoerister Jul 26, 2017
7e09d1e
async-llvm(18): Instantiate OngoingCrateTranslation before starting t…
michaelwoerister Jul 26, 2017
81b789f
async-llvm(19): Already start LLVM while still translating.
michaelwoerister Jul 26, 2017
ab3bc58
async-llvm(20): Do some cleanup.
michaelwoerister Jul 26, 2017
1480be3
async-llvm(21): Re-use worker-ids in order to simulate persistent wor…
michaelwoerister Jul 27, 2017
8819278
async-llvm(22): mw invokes mad html skillz to produce graphical LLVM …
michaelwoerister Jul 27, 2017
f5acc39
async-llvm(23): Let the main thread also do LLVM work in order to red…
michaelwoerister Jul 27, 2017
bd36df8
async-llvm(24): Improve scheduling and documentation.
michaelwoerister Jul 28, 2017
a9a0ea9
async-llvm(25): Restore -Ztime-passes output for trans and LLVM.
michaelwoerister Jul 31, 2017
cacc31f
async-llvm(26): Print error when failing to acquire Jobserver token.
michaelwoerister Jul 31, 2017
b1e043e
async-llvm(27): Move #[rustc_error] check to an earlier point in orde…
michaelwoerister Jul 31, 2017
b8d4413
async-llvm(28): Make some error messages more informative.
michaelwoerister Aug 1, 2017
6468cad
async-llvm(29): Adapt run-make/llvm-phase test case to LLVM module no…
michaelwoerister Aug 1, 2017
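Taken together, these commits turn the backend into a producer/consumer pipeline: translation keeps producing codegen units on the main thread while already-finished modules are handed over a channel to LLVM workers. Below is a minimal, self-contained sketch of that pattern using only the standard library; the names translate_cgu, run_llvm_passes and the "cgu.N" strings are made up for illustration and are not rustc's actual API.

use std::sync::mpsc::channel;
use std::thread;

// Hypothetical stand-ins for "translate one codegen unit" and "run LLVM on it".
fn translate_cgu(name: &str) -> String {
    format!("LLVM module for {}", name)
}

fn run_llvm_passes(module: String) {
    println!("optimizing + generating code for: {}", module);
}

fn main() {
    let (tx, rx) = channel::<String>();

    // Consumer side: plays the role of the LLVM work coordinator / worker pool.
    let llvm_thread = thread::spawn(move || {
        for module in rx {
            run_llvm_passes(module);
        }
    });

    // Producer side: "translation" keeps going while LLVM is already running.
    for name in vec!["cgu.0", "cgu.1", "cgu.2"] {
        tx.send(translate_cgu(name)).unwrap();
    }
    drop(tx); // closing the channel lets the LLVM thread exit its loop

    llvm_thread.join().unwrap();
}

The sketch only illustrates the overlap of translation and LLVM work; the PR itself adds jobserver token accounting, diagnostics forwarding, and per-module resource management on top of this shape.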
async-llvm(4): Move work coordination to separate thread in order to free up the main thread for translation.
michaelwoerister committed Jul 31, 2017
commit df6be33d84f14c286689938eb2a2686315926e9f
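This commit moves the jobserver-token/work-dispatch loop off the main thread: crossbeam's scoped threads go away, the loop runs inside thread::spawn, and the main thread merely joins it. Because a detached thread's closure must be 'static, CodegenContext can no longer hold &'a borrows, which is why exported_symbols and opts become Arcs in the diff below. A rough sketch of that ownership change, with a deliberately simplified CodegenContext (a Vec<String> stands in for the real ExportedSymbols):

use std::sync::Arc;
use std::thread;

// Simplified stand-in for write::CodegenContext; the real struct has many more fields.
#[derive(Clone)]
struct CodegenContext {
    // `thread::spawn` requires a 'static closure, so shared data is reference-counted
    // (before this commit it was borrowed, e.g. `&'a ExportedSymbols`).
    exported_symbols: Arc<Vec<String>>,
}

fn spawn_work(cgcx: CodegenContext, worker_index: usize) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        println!("worker {}: sees {} exported symbols",
                 worker_index, cgcx.exported_symbols.len());
    })
}

fn main() {
    let cgcx = CodegenContext {
        exported_symbols: Arc::new(vec!["main".to_string()]),
    };

    // Coordinator thread, mirroring the `thread::spawn(...).join().unwrap()` shape
    // in the diff: it dispatches workers and the main thread just joins it.
    let coordinator = thread::spawn(move || {
        let handles: Vec<_> = (0..3usize).map(|i| spawn_work(cgcx.clone(), i)).collect();
        for handle in handles {
            handle.join().unwrap();
        }
    });

    coordinator.join().unwrap();
}

The real code additionally routes diagnostics through SharedEmitter and accounts for jobserver tokens; the sketch only shows why the Arc fields and the spawn/join shape appear in this commit.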
124 changes: 61 additions & 63 deletions src/librustc_trans/back/write.rs
@@ -28,7 +28,6 @@ use syntax::ext::hygiene::Mark;
 use syntax_pos::MultiSpan;
 use context::{is_pie_binary, get_reloc_model};
 use jobserver::{Client, Acquired};
-use crossbeam::{scope, Scope};
 use rustc_demangle;

 use std::cmp;
@@ -38,8 +37,10 @@ use std::io
 use std::io::Write;
 use std::path::{Path, PathBuf};
 use std::str;
+use std::sync::Arc;
 use std::sync::mpsc::{channel, Sender, Receiver};
 use std::slice;
+use std::thread;
 use libc::{c_uint, c_void, c_char, size_t};

 pub const RELOC_MODEL_ARGS : [(&'static str, llvm::RelocMode); 7] = [
@@ -283,13 +284,13 @@ impl ModuleConfig {

 /// Additional resources used by optimize_and_codegen (not module specific)
 #[derive(Clone)]
-pub struct CodegenContext<'a> {
+pub struct CodegenContext {
     // Resouces needed when running LTO
     pub time_passes: bool,
     pub lto: bool,
     pub no_landing_pads: bool,
-    pub exported_symbols: &'a ExportedSymbols,
-    pub opts: &'a config::Options,
+    pub exported_symbols: Arc<ExportedSymbols>,
+    pub opts: Arc<config::Options>,
     pub crate_types: Vec<config::CrateType>,
     pub each_linked_rlib_for_lto: Vec<(CrateNum, PathBuf)>,
     // Handler to use for diagnostics produced during codegen.
@@ -307,18 +308,18 @@ pub struct CodegenContext<'a> {
     pub coordinator_send: Sender<Message>,
 }

-impl<'a> CodegenContext<'a> {
+impl CodegenContext {
     fn create_diag_handler(&self) -> Handler {
         Handler::with_emitter(true, false, Box::new(self.diag_emitter.clone()))
     }
 }

 struct HandlerFreeVars<'a> {
-    cgcx: &'a CodegenContext<'a>,
+    cgcx: &'a CodegenContext,
     diag_handler: &'a Handler,
 }

-unsafe extern "C" fn report_inline_asm<'a, 'b>(cgcx: &'a CodegenContext<'a>,
+unsafe extern "C" fn report_inline_asm<'a, 'b>(cgcx: &'a CodegenContext,
                                                msg: &'b str,
                                                cookie: c_uint) {
     cgcx.diag_emitter.inline_asm_error(cookie as u32, msg.to_string());
@@ -775,9 +776,8 @@ pub fn run_passes(sess: &Session,
         let num_workers = cmp::min(work_items.len() - 1, 32);
         Client::new(num_workers).expect("failed to create jobserver")
     });
-    scope(|scope| {
-        execute_work(sess, work_items, client, &trans.exported_symbols, scope);
-    });
+
+    execute_work(sess, work_items, client, trans.exported_symbols.clone());

     // If in incr. comp. mode, preserve the `.o` files for potential re-use
     for mtrans in modules.iter() {
@@ -1052,11 +1052,10 @@ pub struct Diagnostic {
     lvl: Level,
 }

-fn execute_work<'a>(sess: &'a Session,
-                    mut work_items: Vec<WorkItem>,
-                    jobserver: Client,
-                    exported_symbols: &'a ExportedSymbols,
-                    scope: &Scope<'a>) {
+fn execute_work(sess: &Session,
+                mut work_items: Vec<WorkItem>,
+                jobserver: Client,
+                exported_symbols: Arc<ExportedSymbols>) {
     let (tx, rx) = channel();
     let tx2 = tx.clone();

@@ -1092,7 +1091,7 @@ fn execute_work<'a>(sess: &'a Session,
         each_linked_rlib_for_lto: each_linked_rlib_for_lto,
         lto: sess.lto(),
         no_landing_pads: sess.no_landing_pads(),
-        opts: &sess.opts,
+        opts: Arc::new(sess.opts.clone()),
         time_passes: sess.time_passes(),
         exported_symbols: exported_symbols,
         plugin_passes: sess.plugin_llvm_passes.borrow().clone(),
@@ -1158,68 +1157,67 @@ fn execute_work<'a>(sess: &'a Session,
     // Before that work finishes, however, we may acquire a token. In that case
     // we actually wastefully acquired the token, so we relinquish it back to
     // the jobserver.
-    let mut tokens = Vec::new();
-    let mut running = 0;
-    while work_items.len() > 0 || running > 0 {
-
-        // Spin up what work we can, only doing this while we've got available
-        // parallelism slots and work left to spawn.
-        while work_items.len() > 0 && running < tokens.len() + 1 {
-            let item = work_items.pop().unwrap();
-            let worker_index = work_items.len();
-
-            let cgcx = CodegenContext {
-                worker: worker_index,
-                .. cgcx.clone()
-            };
-
-            spawn_work(cgcx,
-                       scope,
-                       item);
-            running += 1;
-        }
+    thread::spawn(move || {
+        let mut tokens = Vec::new();
+        let mut running = 0;
+        while work_items.len() > 0 || running > 0 {

-        // Relinquish accidentally acquired extra tokens
-        tokens.truncate(running.saturating_sub(1));
+            // Spin up what work we can, only doing this while we've got available
+            // parallelism slots and work left to spawn.
+            while work_items.len() > 0 && running < tokens.len() + 1 {
+                let item = work_items.pop().unwrap();
+                let worker_index = work_items.len();

-        match rx.recv().unwrap() {
-            // Save the token locally and the next turn of the loop will use
-            // this to spawn a new unit of work, or it may get dropped
-            // immediately if we have no more work to spawn.
-            Message::Token(token) => {
-                tokens.push(token.expect("failed to acquire jobserver token"));
-            }
+                let cgcx = CodegenContext {
+                    worker: worker_index,
+                    .. cgcx.clone()
+                };

-            // If a thread exits successfully then we drop a token associated
-            // with that worker and update our `running` count. We may later
-            // re-acquire a token to continue running more work. We may also not
-            // actually drop a token here if the worker was running with an
-            // "ephemeral token"
-            //
-            // Note that if the thread failed that means it panicked, so we
-            // abort immediately.
-            Message::Done { success: true } => {
-                drop(tokens.pop());
-                running -= 1;
+                spawn_work(cgcx, item);
+                running += 1;
             }
-            Message::Done { success: false } => {
-                shared_emitter.fatal("aborting due to worker thread panic".to_string());
+
+            // Relinquish accidentally acquired extra tokens
+            tokens.truncate(running.saturating_sub(1));
+
+            match rx.recv().unwrap() {
+                // Save the token locally and the next turn of the loop will use
+                // this to spawn a new unit of work, or it may get dropped
+                // immediately if we have no more work to spawn.
+                Message::Token(token) => {
+                    tokens.push(token.expect("failed to acquire jobserver token"));
+                }
+
+                // If a thread exits successfully then we drop a token associated
+                // with that worker and update our `running` count. We may later
+                // re-acquire a token to continue running more work. We may also not
+                // actually drop a token here if the worker was running with an
+                // "ephemeral token"
+                //
+                // Note that if the thread failed that means it panicked, so we
+                // abort immediately.
+                Message::Done { success: true } => {
+                    drop(tokens.pop());
+                    running -= 1;
+                }
+                Message::Done { success: false } => {
+                    shared_emitter.fatal("aborting due to worker thread panic".to_string());
+                }
             }
         }
+    }).join().unwrap();

-        shared_emitter_main.check(sess);
-    }
+    shared_emitter_main.check(sess);

     // Just in case, check this on the way out.
     sess.diagnostic().abort_if_errors();
 }

-fn spawn_work<'a>(cgcx: CodegenContext<'a>,
-                  scope: &Scope<'a>,
-                  work: WorkItem) {
+fn spawn_work(cgcx: CodegenContext, work: WorkItem) {
     let depth = time_depth();

-    scope.spawn(move || {
+    thread::spawn(move || {
         set_time_depth(depth);

         // Set up a destructor which will fire off a message that we're done as