    Syntactic ambiguity resolution in the GHC parser

    There are places in the Haskell grammar where it's not known a priori whether it's an expression, a command or a pattern that is being parsed. This used to be handled by picking a parse (say, as an expression) and, if that choice later turned out to be wrong, "rejigging it" (transforming the constructed parse tree into its analog in the pattern language). The problem with that approach is that it meant having conflated sub-languages, meaning, for example, HsExpr had to have pattern-related constructors e.g. EWildPat, EAsPat (and further, these propagated into other compiler phases like the renamer and typechecker). This was the case until roughly a year ago, when extraordinary work by Vladislav Zavialov solved the ambiguity resolution issue by parsing into an abstraction with an overloaded representation:

    class DisambECP b where ...
    newtype ECP = ECP { runECP_PV :: forall b. DisambECP b => PV (Located b) }
    
    This innovation might be considered to have come at a cost for developers familiar with the "old" parser, however: understanding the apparent complexity introduced by the ambiguity resolution system. This post attempts to provide some intuition about how the system works and will hopefully lead to the realization that it's not that hard to understand after all!

    Because this post is about building intuition, there are details that are glossed over or omitted entirely: the reader is encouraged to read Vlad's detailed explanatory comments in RdrHsSyn.hs when necessary to address that.

    We start with something familiar - the GHC parser monad:

    newtype P a = P { unP :: PState -> ParseResult a }

    This fundamentally is a wrapper over a function PState -> ParseResult a.

    The "ECP system" (let's call it that) introduces a new (and, as we'll see, very related) concept: the parser validator monad.

    newtype PV a = PV { unPV :: PV_Context -> PV_Accum -> PV_Result a }
    

    So a parser validator is a function similar in spirit to a parser where:

    • data PV_Context: essentially a wrapper around the lexer ParserFlags value;
    • data PV_Accum: the state accumulated during parse validation (like errors & warnings, comments, annotations);
    • data PV_Result: the parser validator function's result type, that is, data PV_Result a = PV_Ok PV_Accum a | PV_Failed PV_Accum (one of "success" or "failure").

    Of critical interest is how this type is made a monad.

    instance Functor PV where
      fmap = liftM
    
    instance Applicative PV where
      pure a = a `seq` PV (\_ acc -> PV_Ok acc a)
      (<*>) = ap
    

    The above reveals that an expression like return e, where e is of type a, constructs a function that, given arguments ctx and acc, returns e. The moral equivalent of const.

    instance Monad PV where
      m >>= f = PV $ \ctx acc ->
        case unPV m ctx acc of
          PV_Ok acc' a -> unPV (f a) ctx acc'
          PV_Failed acc' -> PV_Failed acc'
    

    The bind operation composes PV actions, threading context and accumulators through the application of their contained functions: given an m :: PV a and a function f :: a -> PV b, then m >>= f constructs a PV b that wraps a function that composes f with the function in m.
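    To build intuition for this, here is a tiny self-contained model of the parser validator machinery. The names mirror GHC's, but PV_Context and PV_Accum are simplified stand-ins, and addError/runToy are invented helpers for this sketch, not the real API:

```haskell
import Control.Monad (ap, liftM)

-- Simplified stand-ins for the real types.
type PV_Context = String            -- imagine parser flags / hints
type PV_Accum   = [String]          -- imagine accumulated messages

data PV_Result a = PV_Ok PV_Accum a | PV_Failed PV_Accum

newtype PV a = PV { unPV :: PV_Context -> PV_Accum -> PV_Result a }

instance Functor PV where
  fmap = liftM

instance Applicative PV where
  pure a = PV (\_ acc -> PV_Ok acc a)
  (<*>) = ap

instance Monad PV where
  m >>= f = PV $ \ctx acc ->
    case unPV m ctx acc of
      PV_Ok acc' a   -> unPV (f a) ctx acc'
      PV_Failed acc' -> PV_Failed acc'

-- Record a non-fatal error in the accumulator and continue.
addError :: String -> PV ()
addError msg = PV $ \_ acc -> PV_Ok (acc ++ [msg]) ()

-- A toy interpreter in the spirit of runPV.
runToy :: PV a -> Either [String] (a, [String])
runToy m = case unPV m "<context>" [] of
  PV_Ok acc a   -> Right (a, acc)
  PV_Failed acc -> Left acc
```

    Note that runToy (addError "oops" >> pure 42) succeeds with the error accumulated alongside the result: a PV computation can report problems without aborting.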

    PV is a bit more than a monad; it also satisfies the MonadP class, the class of monads that support parsing-related operations: querying for active language extensions, and storing warnings, errors, comments and annotations.

    instance MonadP PV where
      addError srcspan msg = ....
        PV $ \ctx acc@PV_Accum{pv_messages=m} ->
          let msg' = msg $$ pv_hint ctx in
          PV_Ok acc{pv_messages=appendError srcspan msg' m} ()
      addWarning option srcspan warning = ...
      addFatalError srcspan msg =...
      getBit ext =
        PV $ \ctx acc ->
          let b = ext `xtest` pExtsBitmap (pv_options ctx) in
          PV_Ok acc $! b
      addAnnotation (RealSrcSpan l _) a (RealSrcSpan v _) = ...
      ...
    

    The function runPV is the interpreter of a PV a. To run a PV a through this function is to produce a P a.

    runPV :: PV a -> P a
    

    That is, given a PV a, construct a function PState -> ParseResult a.

    runPV m =
      P $ \s ->
        let
          pv_ctx = PV_Context {...} -- init context from parse state 's'
          pv_acc = PV_Accum {...} -- init local state from parse state 's'
          -- Define a function that builds a parse state from local state
          mkPState acc' =
            s { messages = pv_messages acc'
              , annotations = pv_annotations acc'
              , comment_q = pv_comment_q acc'
              , annotations_comments = pv_annotations_comments acc' }
        in
          -- Invoke the function in m with context and state, harvest its revised state and
          -- turn its outcome into a ParseResult.
          case unPV m pv_ctx pv_acc of
            PV_Ok acc' a -> POk (mkPState acc') a
            PV_Failed acc' -> PFailed (mkPState acc')
    

    Now consider a production that is ambiguous between the expression and pattern languages: parenthesized things.

    '(' texp ')'
    

    In the context of a pattern we expect an AST with a ParPat _ p node whereas in the context of an expression we want an AST with an HsPar _ e node. To this end the DisambECP class embodies an abstract set of operations for parse tree construction.

    class DisambECP b where
      ...
    
      -- | Return a command without ambiguity, or fail in a non-command context.
      ecpFromCmd' :: LHsCmd GhcPs -> PV (Located b)
      -- | Return an expression without ambiguity, or fail in a non-expression context.
      ecpFromExp' :: LHsExpr GhcPs -> PV (Located b)
    
      ... Lots of operations like this
      mkHsOpAppPV :: SrcSpan -> Located b -> Located (InfixOp b) -> Located b -> PV (Located b)
      mkHsVarPV :: Located RdrName -> PV (Located b)
    
      ...
    

    The idea is that in the semantic actions of the grammar we construct and compose parser validators in terms of these abstract functions. Running the PVs produces parsers, and at the point of execution of parsers we know the context (the nature of the AST we expect to receive), so the concrete choices for each of the abstract functions are thereby fixed (and then, on evaluation, we get the parse result).
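    The essence of this can be captured in a miniature self-contained sketch. Here DisambToy, ToyExpr and ToyPat are invented stand-ins for DisambECP and the real AST types; the point is only that one polymorphic construction gets resolved to different concrete trees by the use-site's expected type:

```haskell
-- Invented stand-in for DisambECP: abstract tree construction.
class DisambToy b where
  mkVar :: String -> b
  mkPar :: b -> b

-- Two toy "sub-languages": expressions and patterns.
data ToyExpr = EVar String | EPar ToyExpr deriving (Eq, Show)
data ToyPat  = PVar String | PPar ToyPat  deriving (Eq, Show)

instance DisambToy ToyExpr where
  mkVar = EVar
  mkPar = EPar

instance DisambToy ToyPat where
  mkVar = PVar
  mkPar = PPar

-- One polymorphic "parse result"; the context fixes the concrete tree.
parenVar :: DisambToy b => String -> b
parenVar = mkPar . mkVar
```

    At the two use sites, the same parenVar "x" elaborates to EPar (EVar "x") or PPar (PVar "x") respectively, just as texp's result becomes an HsPar or a ParPat node depending on context.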

    The only wrinkle is in the return type of productions that produce parser validators. In general, they will have the form forall b. DisambECP b => PV (Located b). If they were monadic productions though, we would be led to P (forall b. DisambECP b => PV (Located b)), and that dog don't hunt due to GHC's lack of support for impredicative types. There is a standard work-around that can be employed though. This newtype is how impredicative types in monadic productions are avoided:

    newtype ECP = ECP { runECP_PV :: forall b. DisambECP b => PV (Located b) }

    So here, ECP is a wrapper around a PV (Located b) value where b can be of any type that satisfies the constraints of class DisambECP. So, in a production that looks like

    | ... {% return (ECP ...)}
    

    we are dealing with P ECP whereas without a newtype we would be dealing with P (forall b. DisambECP b => PV (Located b)).

    Now to produce a P (Located b) from the PV (Located b) in an ECP we have this function:

    runECP_P :: DisambECP b => ECP -> P (Located b)
    runECP_P p = runPV (runECP_PV p)
    

    It takes an ECP value, projects out the parser validator contained therein and "runs" it to produce a function from PState -> ParseResult a (a parser action).

    From the DisambECP instance for HsExpr GhcPs, here's ecpFromCmd':

      ecpFromCmd' (L l c) = do
        addError l $ vcat
          [ text "Arrow command found where an expression was expected:",
            nest 2 (ppr c) ]
        return (L l hsHoleExpr)
    

    Makes perfect sense - you get a parser validator that, when evaluated, will store a (non-fatal) error and return an expression "hole" (an unbound variable called _) so that parsing can continue.

    Continuing, the definition of ecpFromExp':

      ecpFromExp' = return
    

    Also sensible. Simply calculate a function that returns its provided acc argument together with the given constant expression under a PV_Ok result (see the definition of pure in the Applicative instance for PV given above).

    Parenthesizing an expression for this DisambECP instance means wrapping an HsPar around the given e:

      mkHsParPV l e = return $ L l (HsPar noExtField e)
    

    And so on. You get the idea.

    So how does this all fit together? Consider again the production of parenthesized things:

            | '(' texp ')'  { ECP $
                                runECP_PV $2 >>= \ $2 ->
                                amms (mkHsParPV (comb2 $1 $>) $2) [mop $1,mcp $3] }
    

    We note that the texp production calculates an ECP. Stripping away for simplicity the annotation and source code location calculations in the semantic action, in essence we are left with this.

    ECP $ runECP_PV $2 >>= \ $2 -> mkHsParPV $2
    

    The effect of runECP_PV is to project out the forall b. DisambECP b => PV (Located b) value from the result of texp. Recalling that unPV projects out the function that the PV wrapper shields, by substitution of the definition of bind we obtain roughly:

      ECP $ PV $ \ctx acc ->
                    case unPV (runECP_PV $2) ctx acc of
                      PV_Ok acc' a -> unPV (mkHsParPV a) ctx acc'
                      PV_Failed acc' -> PV_Failed acc'
    

    The net effect is that we construct a new parser validator (function) from the parser validator (function) returned by the texp production, one that puts parentheses around whatever that function produces when evaluated. If used in a context where texp generates an LPat GhcPs, that'll be a ParPat node; if an LHsExpr GhcPs, then an HsPar node.

    Whitespace-sensitive operator occurrences

    In GHC, Haskell operator occurrences get classified into one of four categories. For example, the occurrence of ⊕ in a ⊕ b is "loose infix", in a⊕b is "tight infix", in a ⊕b is "prefix" and in a⊕ b, "suffix".

    The point of this is that certain operators can be ascribed different meanings depending on the classification of their occurrence and language extensions that may be in effect. For example, ! when encountered will lex as a strictness annotation (token type ITbang) if its occurrence is prefix (e.g. f !x) or as an ordinary operator (token type ITvarsym) if not (e.g. x ! y). Another ready example is provided by the operator @ which, according to whitespace considerations, may be a type application (prefix), an as-pattern (tight infix), an ordinary operator (loose infix) or a parse error (suffix).

    The implementation of this categorization relies upon two functions: followedByOpeningToken and precededByClosingToken. To explain further:

    • Identifiers, literals and opening brackets (, (#, [|, [||, [p|, [t|, { are considered "opening tokens";
    • Identifiers, literals and closing brackets ), #), ], |], } are considered "closing tokens";
    • Other tokens and whitespace are considered neither opening nor closing.

    The classification algorithm is defined by the following rules:

    precededByClosingToken  followedByOpeningToken  occurrence
    False                   True                    prefix
    True                    False                   suffix
    True                    True                    tight infix
    False                   False                   loose infix
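    The table transcribes directly into a small pure function. This is a hypothetical helper for illustration only; in the real lexer the two predicates are composed with alexAndPred as shown further below:

```haskell
data Occurrence = Prefix | Suffix | TightInfix | LooseInfix
  deriving (Eq, Show)

-- Classify an operator occurrence from the two lexer predicates.
classifyOccurrence
  :: Bool        -- precededByClosingToken?
  -> Bool        -- followedByOpeningToken?
  -> Occurrence
classifyOccurrence False True  = Prefix
classifyOccurrence True  False = Suffix
classifyOccurrence True  True  = TightInfix
classifyOccurrence False False = LooseInfix
```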

    The implementation of precededByClosingToken is very straightforward: look backwards one character in the lexing buffer.
    precededByClosingToken :: AlexAccPred ExtsBitmap
    precededByClosingToken _ (AI _ buf) _ _ =
      case prevChar buf '\n' of
        '}' -> decodePrevNChars 1 buf /= "-"
        ')' -> True
        ']' -> True
        '\"' -> True
        '\'' -> True
        '_' -> True
        c -> isAlphaNum c
    
    Similarly, followedByOpeningToken: look forwards one character in the lexing buffer.
    followedByOpeningToken :: AlexAccPred ExtsBitmap
    followedByOpeningToken _ _ _ (AI _ buf)
      | atEnd buf = False
      | otherwise =
          case nextChar buf of
            ('{', buf') -> nextCharIsNot buf' (== '-')
            ('(', _) -> True
            ('[', _) -> True
            ('\"', _) -> True
            ('\'', _) -> True
            ('_', _) -> True
            (c, _) -> isAlphaNum c
    
    Armed by these rules, the lexing of operators looks like this:
    <0> {
      @varsym / { precededByClosingToken `alexAndPred` followedByOpeningToken } { varsym_tight_infix }
      @varsym / { followedByOpeningToken }  { varsym_prefix }
      @varsym / { precededByClosingToken }  { varsym_suffix }
      @varsym                               { varsym_loose_infix }
    }
    

    The actions varsym_tight_infix, varsym_prefix, varsym_suffix and varsym_loose_infix are "fed" the operator and allow for language-extension-specific issuance of tokens (as opposed to issuing generic ITvarsym tokens). For example, varsym_prefix:

    varsym_prefix :: Action
    varsym_prefix = sym $ \exts s ->
      if | TypeApplicationsBit `xtest` exts, s == fsLit "@"
         -> return ITtypeApp
         |  ...
         | otherwise -> return (ITvarsym s)
    

    GHC Haskell Pats and LPats

    In the Trees that Grow paper, it is explained that GHC has a single data type HsSyn that crosses several compiler phases; that there is a second data type TH.Syntax for Template Haskell; and that other Haskell libraries, e.g. haskell-src-exts, define yet others. Ideally, HsSyn would be reused in Template Haskell and these third-party libraries, and this motivates the flexibility offered by the TTG (Trees That Grow) techniques.

    Before GHC 8.8, patterns and located patterns were related in the following way:

    type LPat = Located Pat
    data Pat p
      = ...
      | LazyPat (XLazyPat p) (LPat p)
      ...
    
    That is, patterns with locations are represented by values of type LPat and patterns themselves as values of type Pat. Note that LPat values contain Pat values, which in turn can contain LPat values, hence the name "ping pong style" being given to this idiom.

    Since location annotations may (e.g. GHC native) or may not (e.g. Template Haskell) be present for a given application, it was realized that "baking" locations into the data type is undesirable. For this reason, in 8.8, attempts were made to make their presence strictly a GHC "thing" in the following way:

    type LPat p = Pat p
    data Pat p
      = ...
      | LazyPat (XLazyPat p) (LPat p)
      | ...
      | XPat (XXPat p)
    type instance XXPat (GhcPass p) = Located (Pat (GhcPass p))
    
    That is, in GHC under this approach, locations are stored in the extension constructor - patterns with locations are wrapped in XPat e.g. XPat noExt (L _ (VarPat noExt _)). Of course, now, to get at the location you have to go through an indirection through XPat. For this, the functions cL and dL (and the bi-directional pattern synonym LL) were provided. Applications that don't want locations in the parse tree just don't make use of the XPat constructor.

    It turned out that the 8.8 approach wasn't as good an idea as it seemed; it was a bit more complicated than it needed to be and had some unexpected implications for the existing GHC source code base. It was realized that this following alternative approach yields the same benefits and is what we find in 8.10 and beyond:

    type family XRec p (f :: * -> *) = r | r -> p f
    type instance XRec (GhcPass p) f = Located (f (GhcPass p))
    
    type LPat p = XRec p Pat
    data Pat p
      = ...
      | LazyPat (XLazyPat p) (LPat p)
      | ...
      | XPat (XXPat p)
    type instance XXPat   (GhcPass _) = NoExtCon
    
    Thus for GHC, ping-pong style is restored and applications other than GHC can define the XRec instance as simply f p so that locations are absent.

    In practical terms, going from 8.8 to 8.10, LL becomes L, dL is removed and cL is just L.
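    A toy model makes the mechanism concrete. Here WithLocs and NoLocs are invented pass types and Located is a simplified stand-in, not the real GHC definitions; the point is that one data type serves applications that do and don't track locations:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A toy source location wrapper.
data Located a = L Int a

-- The extension point: each "application" chooses how to wrap sub-trees.
type family XRec p (f :: * -> *)

data WithLocs   -- a GHC-like application: locations present
data NoLocs     -- a Template-Haskell-like application: no locations

type instance XRec WithLocs f = Located (f WithLocs)
type instance XRec NoLocs   f = f NoLocs

type LPat p = XRec p Pat

data Pat p
  = VarPat String
  | LazyPat (LPat p)   -- "ping pong": Pat contains LPat contains Pat

-- With locations, we unwrap an L node at every level...
patName :: Pat WithLocs -> String
patName (VarPat s) = s
patName (LazyPat (L _ p)) = patName p

-- ...without locations, the wrapper simply isn't there.
patNameNoLocs :: Pat NoLocs -> String
patNameNoLocs (VarPat s) = s
patNameNoLocs (LazyPat p) = patNameNoLocs p
```

    The same Pat declaration yields ping-pong style for WithLocs and a location-free tree for NoLocs, which is exactly the flexibility the XRec family buys.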

    Partitions of a set

    Having "solved" a bunch of these divide & conquer problems, I'm the first to admit to having been lulled into a false sense of security. At first glance, the problem of this post seemed deceptively simple and consequently I struggled with it, sort of "hand-waving", not really engaging my brain and getting more and more frustrated that the dang thing wouldn't yield to my experience! I think the moral of the story is that math doesn't care about your previous successes, so don't let your past practice trick you into laziness. Be guided by your experience but fully apply yourself to the problem at hand!

    Suppose a set of two elements, {2, 3}. There are only two ways it can be partitioned: (23), (3)(2). For meaning, you might think of these two partitions like this: in the first partition, there is a connection between the elements 2 and 3; in the second, 2 and 3 are isolated from each other.

    Suppose a set of elements {1, 2, 3}. There are five partitions of this set: (123), (23)(1), (13)(2), (3)(21), (3)(2)(1) (I've carefully written them out this way to help with the elucidation). Maybe you want to break here and see if you can write an algorithm for calculating them before reading on?

    Observe that we can get the partitions of {1, 2, 3} from knowledge of the partitions of {2, 3} by looking at each partition of {2, 3} in turn and considering the partitions that would result by inclusion of the element 1. So, for example, the partition (23) gives rise to the partitions (123) and (23)(1). Similarly, the partition (3)(2) gives rise to the partitions (13)(2), (3)(21) and (3)(2)(1). We might characterize this process of computing new partitions of {1, 2, 3} from a partition p of {2, 3} as "extending" p.

    Suppose then we write a function extend x p to capture the above idea. Let's start with the signature of extend. What would it be? Taking (23)(1) as an exemplar, we see that a component of a partition can be represented as [a] and so a partition itself then as [[a]]. We know that extend takes an element and a partition and returns a list of (new) partitions so it must have signature extend :: a -> [[a]] -> [[[a]]] (yes, lists of lists of lists are somehow easy to get confused about).

    Now for writing the body of extend. The base case is the easiest of course - extending the empty partition:

    extend x [] = [[[x]]]
      
    That is, a singleton list of partitions where that one partition has one component. The inductive case is the partition obtained by "pushing" x into the first component of p together with the extensions that leave the first component of p alone.
    extend x (h : tl) = ((x : h) : tl) : map (h :) (extend x tl)
    

    We can now phrase the function partition with signature partition :: [a] -> [[[a]]] like this:

    partition [] = [[]]
    partition (h : tl) = concatMap (extend h) (partition tl)
    
    The base case says: the only partition of the empty set is the empty partition.

    Wrapping it all up, the algorithm in entirety is

    partition :: [a] -> [[[a]]]
    partition [] = [[]]
    partition (h : tl) = concatMap (extend h) (partition tl)
      where
        extend :: a -> [[a]] -> [[[a]]]
        extend x [] = [[[x]]]
        extend x (h : tl) = ((x : h) : tl) : map (h :) (extend x tl)
    
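    To check the algorithm against the worked example above, here is the same code again as a standalone program (demo is an added name for the demonstration, not part of the algorithm):

```haskell
partition :: [a] -> [[[a]]]
partition [] = [[]]
partition (h : tl) = concatMap (extend h) (partition tl)
  where
    extend :: a -> [[a]] -> [[[a]]]
    extend x [] = [[[x]]]
    extend x (h : tl) = ((x : h) : tl) : map (h :) (extend x tl)

-- The five partitions of {1, 2, 3}:
--   [[1,2,3]], [[2,3],[1]], [[1,3],[2]], [[3],[1,2]], [[3],[2],[1]]
demo :: [[[Int]]]
demo = partition [1, 2, 3]
```

    These are exactly (123), (23)(1), (13)(2), (3)(21), (3)(2)(1) in the notation used earlier.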

    Build GHC with stack and hadrian

    By far the easiest way I know of to get a build of GHC is via the tools 'stack' and 'hadrian'*. The procedures below set out commands that I know first-hand work** with machines provisioned by the CI systems Azure, Travis and Appveyor.

    Install stack

    • Ubuntu:
      curl -sSL http://get.haskellstack.org/ | sh
      stack setup
      
    • macOS:
      /usr/bin/ruby -e \
        "$(curl -fsSL http://raw.githubusercontent.com/Homebrew/install/master/install)"
      brew install autoconf automake gmp
      curl -sSL http://get.haskellstack.org/ | sh
      stack setup
      
    • Windows:
      (install stack via the official Windows installer from haskellstack.org)
      stack setup

    Build GHC

    • Ubuntu & macOS:
      git clone --recursive http://gitlab.haskell.org/ghc/ghc.git
      cd ghc
      hadrian/build.stack.sh --configure --flavour=quickest -j
      
    • Windows:
      git clone --recursive http://gitlab.haskell.org/ghc/ghc.git
      cd ghc
      hadrian/build.stack.bat --configure --flavour=quickest -j
      


    [*] The simplicity and uniformity of these commands make me an advocate of these tools and, in particular, the hadrian --configure flag.

    [**] Well, that is to say, mostly work. The above is the ideal and has worked for me reliably for the last year. Recently though, for one reason or another, there seem to have been a lot of breakages. Your mileage may vary.

    Parsing with comments and annotations

    My last post on parsing in the presence of dynamic pragmas left us with this outline for calling the GHC parser.

          flags <-
            parsePragmasIntoDynFlags
              (defaultDynFlags fakeSettings fakeLlvmConfig) file s
          whenJust flags $ \flags ->
             case parse file flags s of
                PFailed s ->
                  report flags $ snd (getMessages s flags)
                POk s m -> do
                  let (wrns, errs) = getMessages s flags
                  report flags wrns
                  report flags errs
                  when (null errs) $ analyzeModule flags m
    

    Now, it's a fact that you'll not find certain things in a GHC parse tree, like comments and the locations of keywords (e.g. let, in and so on). Certainly, if you're writing refactoring tools (think programs like Neil Mitchell's awesome hlint for example), access to these things is critical!

    So, how does one go about getting these program "annotations"? You guessed it... there's an API for that.

    If we assume the existence of a function analyzeModule :: DynFlags -> Located (HsModule GhcPs) -> ApiAnns -> IO () then, here's the gist of the code that exercises it:

          case parse file flags s of
            PFailed s ->
              report flags $ snd (getMessages s flags)
            POk s m -> do
              let (wrns, errs) = getMessages s flags
              report flags wrns
              report flags errs
              when (null errs) $ analyzeModule flags m (harvestAnns s)
    Here harvestAnns is defined as
        harvestAnns pst =
          ( Map.fromListWith (++) $ annotations pst
          , Map.fromList ((noSrcSpan, comment_q pst) : annotations_comments pst)
          )
    

    The type ApiAnns is a pair of maps: the first map contains keyword and punctuation locations, the second maps locations of comments to their values.

    You might think that's the end of this story but there's one twist left : the GHC lexer won't harvest comments by default - you have to tell it to do so by means of the Opt_KeepRawTokenStream (general) flag (see the GHC wiki for details)!

    Taking the above into account, to parse with comments, the outline now becomes:

          flags <-
            parsePragmasIntoDynFlags
              (defaultDynFlags fakeSettings fakeLlvmConfig) file s
          whenJust flags $ \flags ->
         case parse file (flags `gopt_set` Opt_KeepRawTokenStream) s of
                PFailed s ->
                  report flags $ snd (getMessages s flags)
                POk s m -> do
                  let (wrns, errs) = getMessages s flags
                  report flags wrns
                  report flags errs
                  when (null errs) $ analyzeModule flags m (harvestAnns s)
    

    For a complete program demonstrating all of this see this example in the ghc-lib repo.

    Have GHC parsing respect dynamic pragmas

    This post about Handling GHC parse errors shows that using qualified in postpositive position is a syntax error unless the ImportQualifiedPost language extension is enabled. In that post, it is explained that the program

    module M where
    import Data.List qualified
    
    is invalid whereas,
    {-# LANGUAGE ImportQualifiedPost #-}
    module M where
    import Data.List qualified
    
    which enables the extension via a "dynamic pragma", is legit.

    Perhaps surprisingly, running the second of these programs through the parsing code presented in that post continues to generate the error

         Found `qualified' in postpositive position.
         To allow this, enable language extension 'ImportQualifiedPost'
    
    Evidently, our parse-fu needs an upgrade to respect dynamic pragmas and that's what this post provides.

    This code exercises the GHC API to parse a module.

    parse :: String -> DynFlags -> String -> ParseResult (Located (HsModule GhcPs))
    parse filename flags str =
      unP Parser.parseModule parseState
      where
        location = mkRealSrcLoc (mkFastString filename) 1 1
        buffer = stringToStringBuffer str
        parseState = mkPState flags buffer location

    Note in the above the second argument flags :: DynFlags. In order for parse to take into account extensions enabled by pragmas in the source argument str, flags must be set up to do so a priori. That is, before jumping into parse, a "first pass" must be made to sniff out flags. There is a GHC API for that: it's called parseDynamicFilePragma.

    Here's a function to harvest flags from pragmas that makes that call to parseDynamicFilePragma.

    parsePragmasIntoDynFlags :: DynFlags -> FilePath -> String -> IO (Maybe DynFlags)
    parsePragmasIntoDynFlags flags filepath str =
      catchErrors $ do
        let opts = getOptions flags (stringToStringBuffer str) filepath
        (flags, _, _) <- parseDynamicFilePragma flags opts
        return $ Just flags
      where
        catchErrors :: IO (Maybe DynFlags) -> IO (Maybe DynFlags)
        catchErrors act = handleGhcException reportErr
                            (handleSourceError reportErr act)
        reportErr e = do putStrLn $ "error : " ++ show e; return Nothing
    
    The main contribution of this function is to account for the complication that parseDynamicFilePragma can throw two kinds of exceptions: GhcException and SourceError. The GHC API functions handleGhcException and handleSourceError are the means to achieve that.
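    The "first pass" idea itself - scan the source text for pragmas before the real parse - can be illustrated with a toy. This is an invented sketch, not the GHC API (the real machinery is getOptions followed by parseDynamicFilePragma, which also handles OPTIONS_GHC pragmas, block comments and more):

```haskell
import Data.Char (isSpace)
import Data.List (isPrefixOf, isSuffixOf)

-- Toy first pass: collect extension names from single-line
-- LANGUAGE pragmas in a source string.
sniffExtensions :: String -> [String]
sniffExtensions src =
  [ ext
  | l <- map trim (lines src)
  , "{-# LANGUAGE" `isPrefixOf` l
  , "#-}" `isSuffixOf` l
  , let body = drop (length "{-# LANGUAGE") (take (length l - 3) l)
  , ext <- map trim (splitOn ',' body)
  ]
  where
    trim = dropWhile isSpace . reverse . dropWhile isSpace . reverse
    splitOn c s = case break (== c) s of
      (a, [])       -> [a]
      (a, _ : rest) -> a : splitOn c rest
```

    Given the second example program from above, this harvests ImportQualifiedPost; the real first pass turns such discoveries into an updated DynFlags.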

    Putting it all together then, here's an outline of how to parse in the presence of dynamic pragmas.

          s <- readFile' file
          flags <-
            parsePragmasIntoDynFlags
              (defaultDynFlags fakeSettings fakeLlvmConfig) file s
          whenJust flags $ \flags ->
             case parse file flags s of
                PFailed s ->
                  report flags $ snd (getMessages s flags)
                POk s m -> do
                  let (wrns, errs) = getMessages s flags
                  report flags wrns
                  report flags errs
                  when (null errs) $ analyzeModule flags m
    
    For a complete working program that utilizes this function, see this example in the ghc-lib repo.
